Notes to Computational Philosophy
1. Argument undoubtedly has a history as old as human communication, and the self-conscious consideration of what makes an argument valid cannot have been far behind. The latter is clearly evident in the Sophists and Plato (see Bobzien 2006). The first structured and systematized formalization of valid argument, however, is generally credited to Aristotle. The vision of formalization embodied in Aristotle is breathtaking: that properties crucial to the rationality of something as richly human and context-dependent as argument, tied to content and intended for social persuasion, can be captured in a skeleton of icy symbolism that has neither human content nor social context. It is that vision of formalization as capturing the validity of argument—of various forms and regardless of application—that continues through the 2500 years of the development of logic since Aristotle.
2. The combined work of Gödel, Church and Turing in the 1930s demonstrated the close connection between formal systems, general recursive functions, and Turing’s abstract vision of symbol-processing machines (Gödel 1931, Church 1936, Turing 1936–1937). That abstract vision laid the groundwork for fully concrete symbol-processing machines: our contemporary computers. In computational philosophy, that contemporary computer power is applied to a range of philosophical questions, including complex questions in logic.
3. Leibniz appears to have been inspired in part by the work of Ramon Llull, whose Ars Magna or Ars generalis ultima of 1308 outlines what is intended as a combinatorial tool of argument and analysis in ethics and religion (Lull 1308). Different permutations of principles regarding goodness, eternity, will, truth, concordance, and the like are intended to generate and answer questions such as “is eternal goodness concordant?” Llull’s vision traces in turn to the medieval Arabic zairja, a combinatoric algorithm intended to generate truths from a finite number of elements (Gray 2016, Khaldūn 1377). In fiction, the image of combinatoric machines continues in Hermann Hesse’s The Glass Bead Game, a vision set far in the future. The game is described as “a kind of synthesis of human learning” in which elements or themes are combined, with deeper and more varied associations developing as the game progresses (Hesse 1943).
4. Many of the features of the Hegselmann-Krause model are anticipated in Lehrer and Wagner (1981), though there the emphasis is on belief convergence rather than polarization.
5. In a series of linked articles, Alexander Riegler and Igor Douven extend the Hegselmann-Krause model in several ways (Riegler & Douven 2009, 2010, Douven & Riegler 2010). They introduce parameters regarding the degree to which an agent’s evidence may be inaccurate, a different weighting for the influence of different agents, and a “space” of interaction in the cellular automata tradition. In a further extension, their agents hold “theories” composed of sets of logically linked beliefs rather than simply single beliefs, and interaction is guided by an “epistemic space” of theory proximity as well. Riegler and Douven find a consistent pattern of results across these extensions: increased interaction with other agents helps agents track the truth more reliably in the long run, but agents that give greater weight to their own evidence approach the truth more quickly.
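The bounded-confidence dynamics on which these extensions build can be sketched in a few lines. The sketch below covers only the basic Hegselmann-Krause update, not Riegler and Douven’s extensions, and its parameter names (`eps` for the confidence bound) are illustrative rather than drawn from the papers:

```python
def hk_step(opinions, eps):
    """One Hegselmann-Krause update: each agent adopts the mean opinion
    of all agents (itself included) whose opinions lie within eps of its own."""
    return [
        sum(y for y in opinions if abs(y - x) <= eps)
        / len([y for y in opinions if abs(y - x) <= eps])
        for x in opinions
    ]

def hk_run(opinions, eps, steps=50):
    """Iterate updates until a fixed point is reached or steps run out."""
    for _ in range(steps):
        updated = hk_step(opinions, eps)
        if updated == opinions:
            break
        opinions = updated
    return opinions

# With a wide confidence bound all agents converge on a single opinion;
# with a narrow one they split into stable clusters.
print(hk_run([0.0, 0.2, 0.4, 0.6, 0.8, 1.0], eps=1.0))
print(hk_run([0.0, 0.1, 0.2, 0.8, 0.9, 1.0], eps=0.25))
```

The second run illustrates the polarization phenomenon at issue: agents too far apart to influence one another settle into separate opinion clusters.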
6. The attempt to track the impact of argumentative strategies and cognitive limitations with regard to the exchange of arguments continues. For example, see Singer et al. (forthcoming).
7. Technically, Weisberg and Muldoon’s agents occupy square cells on a grid. Interaction is with their “Moore neighborhood”: the 8 cells adjacent to theirs horizontally, vertically, and diagonally.
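On a finite grid the Moore neighborhood is straightforward to compute. The sketch below assumes toroidal wrap-around at the edges, a common convention in this modeling tradition rather than a detail taken from Weisberg and Muldoon:

```python
def moore_neighborhood(row, col, rows, cols):
    """The 8 cells surrounding (row, col) on a rows-by-cols grid,
    with toroidal wrap-around at the edges (an assumption made here)."""
    return [
        ((row + dr) % rows, (col + dc) % cols)
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    ]

# Even a corner cell has 8 neighbors under wrap-around.
print(len(moore_neighborhood(0, 0, 10, 10)))
```

Dropping the modulus and filtering out-of-range coordinates instead would give bounded edges, where corner cells have only 3 neighbors.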
8. In her critique, Thoma (2015) in fact outlines an alternative landscape model which supports claims regarding the epistemic advantage of division of labor. A crucial difference from the Weisberg and Muldoon model is that agents are not limited to information and movement in their immediate theoretical neighborhood.
9. Zollman (2005) also merges the work on communication with the work on game-theoretic cooperation above. Agents on a spatialized grid develop a signaling system as outlined, but also play Stag Hunt with neighbors, where play in Stag Hunt can be conditional on the signal received. Play in Stag Hunt co-evolves with signaling systems, with the result that all agents end up playing Stag (Skyrms 2010).
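The structure of signal-conditional play can be sketched minimally as follows; the payoff numbers are illustrative (not Zollman’s), and the signal labels are hypothetical:

```python
# Stag Hunt payoffs (row player, column player). Hunting stag pays best,
# but only if both players cooperate; hare is the safe solo option.
PAYOFF = {
    ("stag", "stag"): (3, 3),
    ("stag", "hare"): (0, 2),
    ("hare", "stag"): (2, 0),
    ("hare", "hare"): (2, 2),
}

def conditional_strategy(signal):
    """Play stag only on signal 'a'; otherwise play the safe hare option."""
    return "stag" if signal == "a" else "hare"

# Two agents who share a signaling system, and so both receive and act
# on signal 'a', coordinate on the payoff-dominant stag-stag outcome.
act1, act2 = conditional_strategy("a"), conditional_strategy("a")
print(PAYOFF[(act1, act2)])
```

The point of the co-evolutionary result is that nothing forces agents to condition on the same signal at the outset; shared signaling systems and mutual Stag play emerge together.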
10. We leave discussion of computation in the context of epistemic logic to other contexts (Fagin, Halpern, Moses, & Vardi 1995; van Benthem 2006; Rendsvig & Symons 2006). In classical form, epistemic logic is an attempt to model belief and knowledge with focus on the individual and at a particular time. The classic source for extension to dynamic logics—in which one considers how an individual’s body of knowledge changes with the addition (expansion) or removal (contraction) of particular beliefs, or the replacement of one belief by another (revision)—is the AGM model, named after developers Carlos Alchourrón, Peter Gärdenfors, and David Makinson (Alchourrón, Gärdenfors, & Makinson 1985; see also van Ditmarsch, van der Hoek, & Kooi 2008; Baltag & Renne 2016). Work has also been done on expanding epistemic logic to a form appropriate to groups rather than individuals (Halpern & Moses 1984; Baltag, Boddy, & Smets 2018). A link between that work and the agent-based approach highlighted above would be fascinating, but has not yet been developed.
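Schematically, the three AGM operations are standardly related as follows, with Cn the classical consequence operator; this is the textbook presentation rather than a rendering of any one of the cited works:

```latex
% Expansion: add \varphi and close under logical consequence
K + \varphi = \mathrm{Cn}\bigl(K \cup \{\varphi\}\bigr)

% Contraction K \div \varphi removes \varphi while retaining as much of K
% as possible; it is constrained by the AGM postulates rather than
% defined by an explicit formula.

% Revision, via the Levi identity: contract by \neg\varphi, then expand by \varphi
K \ast \varphi = (K \div \neg\varphi) + \varphi
```

The Levi identity is what ties the three operations together: revision by a belief that contradicts the current state is handled by first making room (contraction) and then adding (expansion).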
11. Herbert Simon, interviewed by A. Dale, Pittsburgh, April 21, 1994. Quoted in MacKenzie (1995: 11).
12. For a very different computational approach to logic see Mar and Grim (1991) and St. Denis and Grim (1997), both further developed in Grim, Mar, and St. Denis (1998).
13. For a very different computational approach to philosophy of religion, more akin to agent-based modeling, see Shults (2019).