Supplement to Rudolf Carnap

G. Logical Syntax of Language

The Logical Syntax of Language appeared in 1934 (the modified English translation in 1937). It is Carnap’s best-known book, though its reception has been tortuous. The main features of the book itself and its reception history are discussed in the main entry (Section 5) on Carnap; the story of Carnap’s path from the Aufbau to the Syntax is described in Section 4 of that entry (including the inspirations that Carnap took from Wittgenstein’s work). The literature on the Syntax is enormous and growing, addressing many different aspects of the book. We confine ourselves here to a bird’s-eye overview of the book’s five parts and some of its notable features. Carnap’s famous principle of tolerance in the Logical Syntax and some of its applications to the (meta-)linguistic reconstruction of metaphysical sentences are explained in Sections 1 and 2 of the supplement on Tolerance, Metaphysics, and Meta-Ontology, as is Gödel’s criticism of the principle and of Carnap’s syntactical account of mathematics more generally (Section 1 of that supplement). Carnap’s extensionality thesis in the Logical Syntax (§§63–71), including his treatment of so-called “quasi-syntactical sentences” (§§63–64), is discussed in the supplement on Semantics (Section 2).

The stated aim of the Logical Syntax is to show that logic is syntactical, that is, that it consists in formal theories of linguistic symbols. A theory is “formal”, Carnap says, when

no reference is made in it to either the meaning of the symbols (for example, the words) or to the sense of the expressions (e.g., the sentences), but simply and solely to the kinds and order of the symbols from which the expressions are constructed. (LSS: 1)

The idea had been suggested by Hilbert’s metamathematics, which Carnap regarded as “the syntax of the mathematical language” (§2), and which he sought to extend to the whole of scientific knowledge—a logical syntax of the language(s) of science. The methods by which he aimed to carry out this program are mainly taken from the metamathematics of mathematical theories, above all Gödel’s arithmetization of syntax, which plays a central role in Part II of the book. But Carnap reiterates in the book’s final sections that the main purpose of the formal and semi-formal constructions in the Logical Syntax is their application to empirical science.

This said, the two main examples of formal languages developed by Carnap in the Logical Syntax are nonetheless motivated by considerations from the foundations of mathematics. Language I is a version of primitive recursive arithmetic, which he takes to exemplify a constructivist-finitist kind of language; while Language II, which contains Language I as a sublanguage, comprises substantial parts of the language of classical mathematics, including real analysis and portions of set theory. Ultimately, Carnap wants to claim that neither the classical logicians/mathematicians nor the intuitionists are “right” about mathematics: no fact of the matter is going to settle their dispute, as both offer possible forms of mathematical languages suitable for particular purposes.

Of the book’s five parts, the first three are taken up with the construction and study of these two languages. Part I presents Language I in the same way in which the syntax of formal languages is defined and explained in a modern logic textbook, that is, semi-formally. Part III does the same for Language II. Part II presents Language I in a completely arithmetized, and in this sense formalized, manner; and its metalanguage, which specifies the syntax of Language I, may itself be formalized. The machinery by which syntax may be represented arithmetically had just been introduced by Gödel as a means for proving his celebrated two incompleteness theorems, and Carnap was the first to apply the same technique in a philosophical context. By reconstructing syntactic concepts as arithmetical concepts and syntactic laws as arithmetical laws, much of the syntax of Language I could be developed within Language I itself (though not all of it—see the discussion of derivability below).

Throughout the Syntax, Carnap distinguishes clearly between the object languages that he constructs and studies, and the metalanguage in which he carries out his investigations; and he claims that neglecting the corresponding use-mention distinction has led previous authors astray (LSS: 18, §42). When we speak of “languages” here, we follow Carnap’s usage in the Syntax, but it should be kept in mind that these languages do not merely involve vocabularies and syntactic formation rules but also axioms, definitions, and “rules of transformation” (that is, rules of inference). Hence, in the present sense of these terms, they are really axiomatized theories or “calculi” (§2). (Other authors at the time, e.g., Tarski, used the term “language” in a similar manner.)

Carnap’s so-called “definite” Language I is characterized by the following three features: all of its primitive or defined predicates are decidable; its quantifiers are restricted (as in “for all \(n \lt 7\), it holds that…” or “there is an \(n \lt 7\), such that…”); and its variables are meant to range over natural numbers (that is, variables may be instantiated by numerals). The only way to express unbounded universal quantifiers is by means of free variables, which are regarded as universally bound “from the outside”.
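For illustration (a schematic rendering in modern notation, not Carnap’s own symbolism): a restricted quantification and a free-variable generalization of Language I would take roughly the forms

\[ (\forall n \lt 7)\, P(n) \qquad \text{and} \qquad P(x), \]

where the second formula, containing only the free variable \(x\), is read as if it were prefixed by an unbounded universal quantifier, even though no such quantifier can be expressed within Language I itself.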

In contrast, the “indefinite” Language II also includes indefinite concepts (open formulas that are not decidable), allows for unrestricted quantification, and includes variables of all types recognized by (a version of) the simple theory of types, including higher-order variables for properties, relations, and functions, all of them defined over the first-order domain of natural numbers. Once real numbers have been identified with classes or properties of natural numbers (corresponding, e.g., to the binary expansions of real numbers), Language II may also be thought of as quantifying over real numbers.
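To indicate how such an identification might go (a standard construction, not Carnap’s detailed formulation): a real number \(r\) in the unit interval can be correlated with the property \(P\) of natural numbers that holds exactly at those places where the binary expansion of \(r\) has the digit 1, so that

\[ r \;=\; \sum_{n \,:\, P(n)} 2^{-(n+1)}, \]

and quantification over such properties in Language II can then do duty for quantification over real numbers.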

We turn now to the logical symbols in both languages, which are the ones that are still common today, that is, propositional connectives and quantifiers. When Carnap speaks of their meaning, he takes it to be determined by their underlying rules of inference. Provisionally, he says, one may also explain the meaning of logical connectives by translating them into natural language or by giving truth tables, though he thinks of these two methods as being less exact (see §5). At the same time, he tries to avoid speaking of the “meaning” of logical symbols at all, restricting himself, to the extent possible, to purely syntactical matters. In contemporary terminology: Carnap adopts a version of proof-theoretic semantics for logical operators (see entry on proof-theoretic semantics for a survey) and hence a version of logical inferentialism (section 1.2 of Schroeder-Heister 2012 [2018]: “Proof-theoretic semantics… belongs to inferentialism”); see Peregrin (forthcoming) for more on Carnap’s inferentialism. In §5 Carnap says about the logical connectives: “In a strictly formally constructed system, the meaning of these symbols… arises out of the rules of transformation” (that is, the rules of inference). When he speaks of meaning here, he does not refer to any of the prevailing philosophical theories of meaning at the time, but to meaning as it is understood now in modern proof-theoretic semantics: inferential meaning, “semantics in terms of proof” (Schroeder-Heister 2012 [2018]), such that the inferential rules define the logical connectives (compare Gentzen 1934/1935). Carnap does admit talk of meaning in the Logical Syntax, but “the relations of meaning between the sentences are given by means of the rules of consequence” and

in order to determine whether or not one sentence is a consequence of another, no reference need be made to the meaning of sentences… It is sufficient that the syntactical design of the sentences be given. (LSS: §71)
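To illustrate the inferentialist point with a familiar example (ours, not one of Carnap’s): on this view, the rules

\[ \frac{A \qquad B}{A \wedge B}, \qquad \frac{A \wedge B}{A}, \qquad \frac{A \wedge B}{B} \]

may be taken to fix whatever “meaning” the conjunction sign has, without any appeal to truth tables or to translations into natural language.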

Accordingly, in later parts of the Logical Syntax, he explains how certain sentences about meaning in the “material mode of speech” can be reconstructed more properly as sentences about syntax that are formulated in the “formal mode of speech” (§75), by which philosophical confusions can be avoided (§78); which is also why, ideally, talk of meaning should in any case be eschewed in favor of talk of syntax. As he argues and tries to work out in some detail in Part IV of the Logical Syntax (to which we will return below), the content of a class of sentences can be defined as the class of non-valid sentences that follow jointly from its members, logical and descriptive vocabulary can be distinguished, variables can be distinguished from constants, and so forth, in each case solely as determined by the rules of transformation. Even after Carnap made the transition to Tarskian semantics soon after the publication of the Logical Syntax, he retained an inferentialist understanding of scientific theoretical terms: in contrast with observational terms, which he regarded as fully semantically interpreted in a Tarskian manner, Carnap considered theoretical terms as only partially interpreted through their inferential links to each other and to observational terms, where these inferential links are constituted by the deductive structure of the scientific theory by which the theoretical terms are introduced; see supplement on Reconstruction of Scientific Theories for the details. To this day, Carnap’s extremely rich and detailed inferentialist understanding of logic in his Logical Syntax and its ramifications in his later work in the philosophy of science have not been properly acknowledged even by the proof-theoretic semantics community (with the exception of Peregrin forthcoming), although the Logical Syntax predates all of the standard references in proof-theoretic semantics except for Gentzen (1934/1935), which appeared in the same year.

As far as their mathematical vocabularies are concerned, both Language I and Language II include the vocabulary of arithmetic as well as a primitive “least number” operator (the K operator); e.g., ‘\((Kx) 9 \textit{Gr}(x, 7)\)’ would stand for the least natural number less than or equal to 9 that is greater than 7 (and thus denotes 8). All of the primitive arithmetical predicates in the two languages are decidable. In Language II, additional symbols of the calculus are definable which are not decidable. (For all practical purposes, Carnap distinguishes the mathematical symbols from the logical ones, as we would do now, although he does not distinguish them on more systematic grounds in his later Part IV on general syntax, and he remarks in Part V (LSS: 327) that so far no one has given a precise demarcation. When in §50 he seeks to define an exact distinction between logical terms and descriptive terms, based solely on the rules of inference that one starts from, he counts the mathematical terms amongst the logical ones, not the descriptive ones.)

Over and above their role for mathematics proper, Carnap employs numerals as names for coordinates by which physical locations or the physical objects at these locations can be identified (much like houses may be identified through house numbers); so, number terms replace the standard proper names of ordinary language. (He continued to emphasize the mathematical representability of empirical structures throughout his career; see supplement on Reconstruction of Scientific Theories (Section 4).) This is in line with Carnap’s view of mathematics as being an aid to operating with empirical sentences (p. 11), and with his conviction that linguistic progress is often tied to an increased mathematization of language (p. 12). In addition, both Languages I and II are meant to be extended by primitive descriptive predicates and function symbols for empirical properties, relations, and functions, although Carnap leaves the details open. (A first step in this direction came in “Testability and Meaning” (1936–37), discussed in the main entry and in the supplement on Reconstruction of Scientific Theories.) For instance, a quaternary predicate T would be used to formulate the atomic formula ‘\(T(0, 8, 4, 3)\)’ by which one would express that the temperature at position 0 is just as much higher than that at position 8 as the temperature at position 4 is higher than that at position 3 (see LSS: 13).

With their vocabularies and formation rules in place, Carnap’s next step is to formulate the transformation rules—the axioms (or axiom schemata), definitions, and rules of inference—of the two languages. Defined symbols are introduced either by explicit or recursive (“regressive”) definitions. As far as Language I is concerned, these definitions may only involve bounded quantifiers or free variables (with a universal interpretation), while in Language II recursive definitions may always be replaced by (higher-order) explicit definitions (following methods already used by Frege).
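To indicate what is at issue (a standard example in the spirit of the Frege-Dedekind method, not Carnap’s own text): the recursive definition of addition,

\[ x + 0 = x, \qquad x + (y+1) = (x + y) + 1, \]

may, in a higher-order language such as Language II, be replaced by an explicit definition that declares \(x + y = z\) to hold just in case the triple \((x, y, z)\) belongs to every ternary relation that contains all triples \((x, 0, x)\) and is closed under passing from \((x, y, z)\) to \((x, y+1, z+1)\).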

In the case of Language I, the transformation rules include axioms and definitions for (classical) propositional logic, the bounded quantifiers, identity, the arithmetical successor symbol, and the K (least number) operator. In Part II of the Logical Syntax, the usual explicit definitions of the numerals and the standard recursive definitions for addition, multiplication, power, and the factorial function are added (still belonging to Language I). The rules of inference cover substitution as well as complete induction over the natural numbers, the latter formulated as a rule rather than as an axiom. Carnap also describes in Part II how the system may be extended by physical axioms and rules (which he calls “P-rules”), and by correlative definitions relating primitive symbols of the language with expressions from physics or everyday language (§25). (Carnap would later replace these correlative definitions by what he called correspondence rules; see supplement on Reconstruction of Scientific Theories (Section 4).) The P-rules might, e.g., include universal law hypotheses from physics or even empirical observation sentences. But all of the transformation rules of Languages I and II themselves are so-called “L-rules”, that is, “rules of transformation that on a material interpretation can be represented as having a logico-mathematical basis” (LSS: 180).

Language II extends (and partially renders unnecessary) the axioms and rules of Language I by adopting the simple theory of types, the principle (rather than the rule) of complete induction over natural numbers, and numerous additional definitions.
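The contrast can be indicated schematically (in modern notation rather than Carnap’s symbolism): in Language I, complete induction takes the form of a rule of inference, roughly

\[ \frac{A[0] \qquad A[x] \to A[x+1]}{A[x]}, \]

where the free variable \(x\) again does duty for unbounded generality, whereas in Language II it can be stated as a single sentence (an axiom or axiom schema) of roughly the form

\[ A[0] \wedge \forall x\,\bigl(A[x] \to A[x+1]\bigr) \;\to\; \forall x\, A[x], \]

since unrestricted quantification is available there.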

On that basis, Carnap is able to define metalinguistically the usual syntactic notion of derivability for both languages in semi-formal terms, more or less as is now done in logic textbooks; all of the axioms of real analysis, for instance, can be shown to be derivable from the axioms, definitions, and rules of Language II. In Part II, Carnap demonstrates that, using arithmetization, predicates such as ‘x is a derivation of y from z’ are even definable in the formally precise arithmetical terms of Language I itself, whereas ‘y is derivable from z’ is not definable in Language I due to the absence of an unbounded existential quantifier (§23). However, the definitions of derivability for both languages may be formalized in the more expressive Language II.
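The underlying reason can be sketched as follows (our notation and predicate names, not Carnap’s): being a derivation of a sentence from given premises is a step-by-step checkable, and hence decidable, relation between (codes of) syntactic objects, whereas derivability asserts the existence of some such derivation,

\[ \textit{Derivable}(y, z) \;\leftrightarrow\; \exists x\, \textit{DerivationOf}(x, y, z), \]

and the unbounded existential quantifier on the right-hand side is precisely what Language I lacks and Language II supplies.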

Carnap distinguishes the standard syntactic notion of derivability from the concept of consequence, which we would now regard as a semantic concept, but which Carnap defines for Language I by invoking an additional syntactic though infinitary rule that is now often called the “omega-rule” or even “Carnap’s rule” (Carnap himself ascribes it correctly to Hilbert, §48):

\[\frac{A[0], A[1], A[2],\ldots}{A[x]} \]

That is: from the infinitely many premises \(A[0],\) \(A[1],\) \(A[2]\),… one may derive \(A[x]\). (Since \(A[x]\) is implicitly universally quantified, one effectively derives the universally quantified statement \(\forall xA[x]\).) It follows from the Incompleteness Theorems that if consequence is defined in terms of derivability with the omega-rule, its extension must exceed that of derivability, as the example below indicates. Indeed, in Part III of the Logical Syntax, Carnap presents a semantic version (concerning arithmetical truth) of the Diagonalization Lemma (§35); this is why he is often credited with discovering the lemma in its general form, of which Gödel had proved only a special instance. He applies the lemma to derive a semantic version of Gödel’s first Incompleteness Theorem (§36), discusses the second Incompleteness Theorem, and draws some general conclusions from them. (For more on Carnap’s back and forth between syntax and semantics in his Logical Syntax see the main entry (Section 5.1) on Carnap.)
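A standard illustration of this gap (our gloss, not an example Carnap gives in these terms): let \(\textit{Con}[x]\) express that \(x\) is not the code of a derivation of a contradiction in Language I. Then, assuming the consistency of Language I, each numerical instance is derivable,

\[ \vdash \textit{Con}[0], \quad \vdash \textit{Con}[1], \quad \vdash \textit{Con}[2], \ \ldots \]

while the free-variable generalization \(\textit{Con}[x]\) is not derivable (by the second Incompleteness Theorem); with the omega-rule, however, \(\textit{Con}[x]\) does count as a consequence of the empty premise class.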

In Part I, Carnap defines a sentence of Language I to be analytic just in case it is a consequence of the empty set. The definitions of analyticity and consequence for Language II in Part III (§§34–36) are more difficult, because Language II is more complex: effectively, Carnap’s definition of analyticity involves a “quasi-Tarskian” recursive definition of truth (compare Awodey 2007), and the (subsequent) definition of consequence amounts to truth preservation for all possible evaluations based on the first-order domain of (numerals for) natural numbers. Indeed, in §2.1 of Tarski (1936 [2002]), Tarski regards Carnap’s definition of consequence as semantic and acknowledges that

The first attempt at the formulation of a precise definition for the proper concept of following comes from R. Carnap; this attempt however is quite essentially tied to the specific properties of the formalized language which was selected as object of the investigation. (that is, Carnap’s Language II)

And in §2.8 Tarski (1936 [2002]) says about his own model-theoretic definition of logical consequence that “it is not difficult to bring the proposed definition closer to the definition, already known to us, of Carnap” and adds later that “On the basis of these agreements and assumptions it is easy to establish that the two cited definitions are equivalent”, though Tarski’s definition is applicable to a broader range of formalized languages than Carnap’s. Just as truth for a language L cannot be defined along Tarskian lines in L (Tarski 1933 [1935]), Carnap’s definition of consequence for Language II requires a more expressive language than Language II, as Carnap demonstrates in Part III. (Carnap’s notion(s) of analyticity and Quine’s criticism of it are discussed in the supplement on Carnap versus Quine on the Analytic-Synthetic Distinction.)

In Part IV of the Syntax, Carnap returns to this distinction between derivability and consequence on more general grounds. The aim here is to sketch a template for “general syntax” by giving a framework for the syntactical development of any language whatsoever. (This is much like §§103–105 in the Aufbau, where Carnap had suggested that there might be general rules of constitution that would guide the construction of definitions in any constitution system whatsoever—but he offered this only as a tentative proposal and labeled the corresponding sections as “may be omitted”.) The idea is to start from a set of transformation rules, i.e., a given set of rules of “direct inference”, on some formal language, and to determine, just on that basis,

the distinction between logical and descriptive symbols, between variables and constants, and further, between logical [L-rules] and extra-logical (physical) transformation rules [P-rules]. (§46)

On the same inferential basis, “d-terms” (derivable, proof, demonstrable, refutable, decidable) would be distinguished from the “c-terms” (consequence, valid, contravalid, determinate, incompatible, content, synonymous)—see §§47–48—even though all of them are considered “syntactic”. (Whenever Carnap wants to restrict consequence to logical consequence, excluding the application of P-rules, he speaks of L-consequence.) For the modern reader this raises the question whether “syntactic” (or “formal”) still conforms to Carnap’s initial “no reference is made… to the meaning of the symbols” characterization (§1 of the Logical Syntax); after all, as Carnap himself later recognized, syntax in his 1934 sense seems to include much of what we would call “semantics” now. Indeed (see Wagner 2009: 22), Languages I and II are not actually purely “formal” in our modern sense at all—they are interpreted languages. While the syntactic method, as Carnap conceives of it, requires semantic interpretations to be disregarded, a particular fixed interpretation remains in place. Carnap often refers to the import of some syntactic result or attribute “in material interpretation” (“bei inhaltlicher Deutung”) (e.g., §11), and the material interpretation sometimes slips in to play a role in the argument. In §60b he even deals explicitly with the semantic concepts of truth and falsity, he proves the inconsistency of the unrestricted truth scheme in the presence of what we now call a Liar cycle (a sentence A claiming a sentence B to be true, while B claims A to be false; see the schematic rendering below), and he sketches an axiomatic theory of truth. (The book was published before Tarski’s work on truth appeared in German, and Carnap does not cite it or rely on it.) But Carnap also rejects this approach, as in his view truth and falsity are not “proper syntactical properties”, and recommends instead that they be replaced or approximated by purely syntactic concepts. Similarly, in §62, he deals with the interpretation of formal languages but then defines interpretation in terms of syntactic translations from one language into another (akin to the modern model-theoretic notion of relative interpretation). This character of the Logical Syntax as a transitional work between syntax and semantics raises many subtle interpretative and historical issues that require distillation of the actual argument from the (often newly-invented) terminology in which it is embedded. Fortunately, a detailed guide to many of these issues is now available in Pierre Wagner’s (2009) handbook on the Logical Syntax.
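The Liar cycle mentioned in connection with §60b can be spelled out schematically (our rendering of the standard argument, not a quotation from Carnap): suppose that

\[ A \leftrightarrow \textit{True}(\ulcorner B \urcorner) \quad \text{and} \quad B \leftrightarrow \neg\textit{True}(\ulcorner A \urcorner), \]

where \(\ulcorner \cdot \urcorner\) is a syntactic name of a sentence. Two applications of the unrestricted truth scheme \(\textit{True}(\ulcorner X \urcorner) \leftrightarrow X\) yield \(A \leftrightarrow B\) and \(B \leftrightarrow \neg A\), and hence the contradiction \(A \leftrightarrow \neg A\).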

One feature of Part IV on “General Syntax” that attracted special attention was the attempt, in §50, at a general definition of logicality, irrespective of the particular features of a language, and the concomitant attempt to distinguish “logical” and “descriptive” sentences in a completely general manner. Suppressing some of the details, the basic thought was to define as logical those sentences that consist only of logical symbols, and to define the set of logical symbols to be the intersection of all maximal sets S of symbols with the property that every sentence A constructed from members of S is such that either A is a consequence of the empty set (in which case A is valid) or every sentence is a consequence of A (in which case A is, in the terms defined in §48, “contravalid”). This is meant to reflect that, pre-theoretically,

all the connections between logico-mathematical terms are independent of extra-linguistic factors, such as, for instance, empirical observations, and that they must be solely and completely determined by the transformation rules of the language. (LSS: 177)
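In schematic form (our paraphrase of the construction, suppressing Carnap’s own notation), the proposal amounts to

\[ \textit{Log} \;=\; \bigcap \bigl\{\, S : S \text{ is a maximal set of symbols such that every sentence built solely from members of } S \text{ is either valid or contravalid} \,\bigr\}, \]

with a sentence counting as logical just in case every symbol occurring in it belongs to \(\textit{Log}\).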

Unfortunately, as Mac Lane (1938) showed in his review of the Logical Syntax, Carnap’s definition of logical expressions does not work, since there may not be maximal sets of symbols with the required properties, and even if there are, in some cases their intersection may not deliver the set of symbols that we would have regarded as logical prior to the general-syntax definition. (See also Awodey 2007, 2017; other counterexamples to Carnap’s definition of logicality can be found in Quine 1960b and Creath 1996b; see Bonnay 2009 for a survey and evaluation.) Another consequence of Carnap’s account is that higher-order existence claims couched in purely logical terms count as logical, even though today we would be more inclined to view them as mathematical existence claims. This is consistent with Carnap’s continuing endorsement in §84 of Fregean logicism in the Syntax, though in a different form; the goal of logicism is now formulated as follows:

the task of the logical foundation of mathematics is not achieved solely by a metamathematics, i.e., a syntax of mathematics, but rather by a syntax of the entire language that comprises both logico-mathematical and synthetic sentences.

Carnap himself soon renounced his definitions of logicality, and in his Introduction to Semantics (1942: 247), he says that the most important change to Logical Syntax “concerns the distinction between logical and descriptive signs, and the related distinction between logical and factual truth”. Now (in 1942), he says, he would make these distinctions “primarily in semantics, not in syntax”. (See the supplement on Semantics for more details.)

In Part V of the Logical Syntax, just as in the final part of the Aufbau, Carnap draws philosophical lessons from the more or less technical construction efforts of the earlier chapters. The main burden of this part is to introduce and deploy, in a variety of contexts, the distinction between the “formal” and the “material” modes of speech in the philosophical metalanguage (for which two candidates, Languages I and II, had been developed in Parts I–III of the book). Though, as we have seen, the distinction between syntax and semantics was not yet entirely clear in the Logical Syntax, the goal of developing a philosophical or scientific meta-language referring solely to linguistic artifacts in an object language (leaving reference to any actual things or processes in the “world” to the scientific object language itself) is clearly spelled out and developed astonishingly far, considering how recently the Hilbert-Gödel-Tarski efforts at formalization had begun. And corresponding to this goal was the over-arching message of Part V that in order to be scientifically unobjectionable, the “logic of science” (i.e., meta-linguistic discussion of all knowledge, Carnap’s projected replacement for philosophy) had to be purely formal, i.e., in line with Carnap’s inferentialism, could not refer to any objects or processes directly, but only to linguistic artifacts in an object language. Philosophy, that is, insofar as it could aspire to survive this radical transformation, had to be restricted to the “formal mode of speech”. Of course Carnap recognized that as a practical matter, the “material mode of speech” (in which objects and processes in the “world” are referred to) was unavoidable, but he goes into some detail to show how, in such cases, translatability into the formal mode of speech remains the critical distinction between usable meta-scientific (philosophical) language in the sense of logical syntax and (what he regards as) metaphysical pseudo-language.

The Wittgensteinian criterion of “meaning” for scientific sentences, so recently made notorious in Carnap’s papers of the early 1930s, especially his critique of Heidegger (Carnap 1932a), was now dropped, therefore, and instead of a demarcation between “meaningful” scientific sentences and “meaningless” metaphysical ones, there was now a distinction between scientifically acceptable sentences in a meta-language that could be translated into purely formal (syntactical) terms and those that could not. (Observation sentences and purely observational generalizations in the scientific object language remained genuinely meaningful in a yet-to-be-determined sense that became the principal bone of contention in the so-called “protocol sentence debate”.) Of course this resolute rejection of “meaning” (in a metalanguage), in any sense that would go beyond syntax, would last only for about a year after the book’s publication, and soon Carnap was speaking of “meaning” in a semantic sense of the term. Many of Carnap’s readers at the time were very confused and disconcerted by this apparent about-turn, and failed to understand that the reintroduction of “meaning” in the purely schematic Tarskian sense that Carnap embraced from about 1935 was not a return to anything, least of all a return to “meaning” in the original sense of Russell’s multiple-relations theory or Wittgenstein’s picture theory of meaning. In the sense of this general motivation, Carnap was not only a kind of inferentialist during his syntax period, as, e.g., Peregrin (forthcoming) suggests, but remained resolutely inferentialist through his entire semantic period—i.e., for the rest of his career—by taking the meaning of expressions in a formally constructed object language to be determined jointly by the syntactic, logical, and semantic rules of a constructed metaframework, and by regarding the choice among such frameworks for semantics as a practical matter that did not reflect any inflationary commitment to the existence of abstract semantic entities (as explained in his “Empiricism, Semantics and Ontology” (Carnap 1950a); see the supplement on Tolerance, Metaphysics, and Meta-Ontology).

Since Part V was written in 1932, before Carnap had arrived at the principle of tolerance (which is actually announced not in Part V but in the concluding section of Part I), there is almost no mention of tolerance in Part V, which is of course the part that got most of the attention from philosophers. Carnap had devoted a good deal of Part IV (written in 1933) to illustrations and applications of the principle of tolerance, but that was overlooked. Part V was widely perceived to reflect the basic philosophical substance of the book, and since the criterion of translatability into the formal mode—which dominates Part V—was taken back soon after the book’s publication, many thought the book as a whole could now be safely ignored. The deeper and more lasting lesson of the principle of tolerance was missed. When the principle of tolerance came back into wider philosophical discussion after the publication of Gödel’s critique in 1995, it was still quite a new and revolutionary idea (see the supplement on Tolerance, Metaphysics, and Meta-Ontology for more on this).

Copyright © 2020 by
Hannes Leitgeb <Hannes.Leitgeb@lmu.de>
André Carus <awcarus@mac.com>
