Stanford Encyclopedia of Philosophy
This is a file in the archives of the Stanford Encyclopedia of Philosophy.

Dynamic Semantics

First published Mon Aug 23, 2010

Dynamic semantics is a perspective on natural language semantics that emphasises the growth of information in time. It is an approach to meaning representation where pieces of text or discourse are viewed as instructions to update an existing context with new information, with an updated context as result. In a slogan: meaning is context change potential. A prime source of inspiration for this dynamic turn is the way in which the semantics of imperative programming languages like C is defined.

It is important to be aware of the abstractness of the perspective, to guard against various non sequiturs. For one thing, one could easily think that dynamic semantics or update semantics is committed, at least in part, to an internalist idea of semantics, since the information states are ‘internal’, in the sense that they are wholly contained in the individual mind or, if you wish, in the individual brain. In other words, one might think that the information states of dynamic semantics are what Putnam (1975) calls ‘states in the sense of methodological solipsism’. See the entries on scientific realism, computational theory of mind, externalism about mental content, and narrow mental content. However, the general framework says nothing about what the states are. The state could very well include the environment in which the receiver is embedded and thus contain an ‘external’ component.

A second possible misunderstanding is that dynamic semantics or update semantics is in complete opposition to classical truth conditional semantics (compare the entries on classical logic and first-order model theory). In fact, as further specification of the format will show, what dynamic semantics provides is a generalization of truth conditional semantics rather than a radically different alternative. The classical meanings become the preconditions for success of the discourse actions. Dynamic semanticists do hold that compositional meanings have the nature of functions or relations, while the classical meanings have the status of projections of the compositional meanings.

The point of the use of an abstract framework is not to give empirical predictions. This is the task of specific realizations inside the framework. The framework of dynamic semantics (i) provides a direction of thinking and (ii) allows us to import methods from the mathematical study of the framework. It follows that the question whether natural language meaning is intrinsically dynamic does not have an empirical answer. Still, what can be said is that the study of interpretation as a temporal process has proven quite fruitful and rewarding.

Since dynamic semantics focuses on the discourse actions of sender and receiver, it is, in a sense, close to use-oriented approaches to meaning in philosophy, such as the work of Wittgenstein and Dummett. However, easy identifications between dynamic semantics and these approaches are to be avoided. Dynamic semantics as an abstract framework is compatible with many philosophical ways of viewing meaning and interpretation. Dynamic semantics aims to model meaning and interpretation. You can do that without answering broader philosophical questions, such as the question what it is that makes it possible for the subject to be related to these meanings at all. E.g., in dynamic predicate logic we take the meaning of horse as given without saying what constitutes the subject's having the concept of horse; we just stipulate the subject has it. This is not to say such questions—which are at the center of the work of Wittgenstein and Dummett—should not ultimately be answered; it is just to say that a model can be of interest even if it does not answer them. Dynamic semantics tries to give a systematic and compositional account of meaning. This makes it markedly different in spirit from Wittgenstein's later philosophy.

1. Meanings in Dynamic Semantics

One approach to dynamic semantics is discourse representation theory, or DRT. This was initiated by Hans Kamp in his paper Kamp 1981. Closely related to Kamp's approach is Irene Heim's file change semantics (see Heim 1983a) and the discourse semantics of Pieter Seuren (see Seuren 1985). Meanings in DRT are a kind of dynamic database called discourse representation structures or DRSs. Since DRSs can be viewed as rudiments of mental representation, they also have a cognitive appeal (Werth 2000).

In the approach associated with discourse representation theory, meanings are types of things that are used in the process of fitting pieces of information together and contain items that assist in the fitting. One could compare this to the forms of pieces of a jigsaw puzzle: the form of an individual piece is connected to the possible integration of the piece in the puzzle as a whole. A database taken by itself is a static object: it is the result of an information gathering activity. However as soon as one tries to give a compositional semantics for DRT, it becomes clear that the true semantic objects have a dynamic side: they contain instructions for merging the databases. Discourse representation theory has a separate entry in the Stanford Encyclopedia; here we will concentrate on what it has in common with a second approach that takes meanings to be resetting actions or update functions.

In this second approach to dynamic semantics, associated with dynamic predicate logic (Groenendijk and Stokhof 1991a), the dynamic meanings are types of actions, things that are individuated by the changes they effect. The basic idea of the relational/update approach in dynamic semantics is that a meaning should be considered as the action type of an action that modifies the receiver's information state. Some basic work in the dynamic predicate logic, or DPL, tradition is to be found in Groenendijk and Stokhof 1991a, Groenendijk and Stokhof 1991b, Muskens 1991, Dekker 1993, Vermeulen 1993a, Eijck 1994, Vermeulen 1994, Groeneveld 1995, Krahmer 1995, Berg 1996, Groenendijk et al. 1996, Hollenberg and Vermeulen 1996, Aloni 1997, Beaver 1997, and Muskens et al. 1997. A closely related approach is update semantics (see Veltman 1991). A unification of DPL and update semantics is given in Groenendijk et al. 1996.

The varieties of dynamic semantics have led to a modification and extension of the model-theoretic approach to natural language semantics in the style of Richard Montague (1973; 1974a; 1974b; compare the entry on logical form). This new version of Montague grammar is called dynamic Montague grammar (Groenendijk and Stokhof 1990, Muskens 1996, and see below).

2. Context

Semanticists may mean various things when they talk about context (compare the entries on epistemic contextualism and indexicals), and these different views engender varieties of dynamic semantics, and sometimes interesting combinations. There has been a variety of concerns: constructing an appropriate mechanism for pronominal reference (compare the entries on anaphora and reference), explaining the semantics of conditionals (compare the entries on conditionals and the logic of conditionals), giving a semantic treatment of the distinction between assertion and presupposition (compare the entries on assertion, speech acts, implicature, pragmatics) and developing a theory of ‘presupposition projection’, explaining how the interpretation of discourse is influenced and guided by the common ground that exists between speaker and hearer, and developing a theory of how this common ground develops as the discourse proceeds (compare the entries on pragmatics and implicature).

Context plays a role in two distinct oppositions. The first opposition is the duality between context and that which modifies the context. Here the context is the information state, or, say, a suitable abstraction from the information state (compare the entry on semantic conceptions of information). The context modifier is the information received. The information cannot be received without the correct kind of presupposed information state. The proper analogues in predicate logic (compare the entries on classical logic and first-order model theory) are as follows. The information state is an assignment (environment) or a set of assignments. The information received is a set of assignments. The second opposition is the duality of context and content. Here the context is something like the storage capacity of the receiver. The content is the information stored. Thus, e.g., the context in this sense could be a set of discourse referents or files. The content would then be some set of assignments or, perhaps, world/assignment pairs on these referents.

Here is an example to illustrate the distinction. Suppose we view an information state as a pair of a finite set of discourse referents and a set of world/assignment pairs, where the assignments have as domain the given set of referents. Such a state would be a context-in-the-first-sense and the set of referents would be a context-in-the-second-sense. One basic kind of update would be update of content: here we constrain the set of world/assignment pairs, and leave the set of referents constant. A second basic kind of update would be extension of the set of referents: we extend our storage capacity. We modify the given world/assignments pairs to pairs of worlds and extended assignments, where our extended assignments are constrained by the old ones, but take all possible values on the new referents. Thus, the update process in our example is two-dimensional: we have both update of content and update of context-in-the-second-sense.
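The two kinds of update in this example can be sketched in code. The following is a minimal illustration, not part of the theory itself: a state is a pair of a set of referents and a list of rows (assignments), worlds are omitted for simplicity, and the domain and all names are invented.

```python
# A sketch of the two-dimensional update in the example above. A state is a
# pair (referents, rows), where each row assigns values to the referents;
# worlds are omitted, and the domain D is invented for illustration.

D = ['a', 'b']

def extend(state, r):
    """Context update: add referent r, letting it take all possible values."""
    refs, rows = state
    return (refs | {r}, [{**row, r: d} for row in rows for d in D])

def constrain(state, pred):
    """Content update: keep only the rows satisfying pred."""
    refs, rows = state
    return (refs, [row for row in rows if pred(row)])

s0 = (set(), [{}])                               # no referents, no information
s1 = extend(s0, 'x')                             # storage capacity grows
s2 = constrain(s1, lambda row: row['x'] == 'a')  # information grows
```

Extending the referents multiplies the rows (all possible values for the new referent), while constraining shrinks them; the two operations act on different dimensions of the state.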

3. Interpretation as a Process

Interpretation of declarative sentences can be viewed as a product or as a process. In the product perspective, one focusses on the notion of truth in a given situation. In the process perspective, interpretation of a proposition is viewed as an information updating step that allows us to replace a given state of knowledge by a new, more accurate knowledge state. Dynamic semantics focuses on interpretation as a process.

3.1 Propositional Logic as an Update Logic

Propositional logic (the logic of negation, disjunction and conjunction) can be viewed as an update logic, as follows. Consider the case where we have three basic propositions P, Q and R, and we know nothing about their truth. Then there are eight possibilities: { P̄Q̄R̄, P̄Q̄R, P̄QR̄, P̄QR, PQ̄R̄, PQ̄R, PQR̄, PQR } Here P̄Q̄R̄ should be read as: none of P, Q, R is true, PQ̄R̄ as: P is true but Q and R are false, and so on. If now ¬P (‘not P’) is announced, four of these disappear, and we are left with { P̄Q̄R̄, P̄Q̄R, P̄QR̄, P̄QR }. If next Q ∨ ¬R (‘Q or not R’) is announced, the possibility P̄Q̄R gets ruled out, and we are left with { P̄Q̄R̄, P̄QR̄, P̄QR }. And so on. We can view the meaning of propositions like ¬P and Q ∨ ¬R as maps from sets of possibilities to subsets of these.

Sets of possibilities represent states of knowledge. In the example, { P̄Q̄R̄, P̄Q̄R, P̄QR̄, P̄QR, PQ̄R̄, PQ̄R, PQR̄, PQR } represents the state of complete ignorance about propositions P, Q, R. Singleton sets like { PQR } represent states of full knowledge about the propositions, and the empty set ∅ represents the inconsistent state that results from processing incompatible statements about P, Q and R. Here are the dynamic meanings of the statements of our propositional language spelled out:

[p]s := s ∩ V(p), where V(p) is the set of possibilities in which p is true;
[¬φ]s := s \ [φ]s;
[φ ∧ ψ]s := [ψ]([φ]s);
[φ ∨ ψ]s := [φ]s ∪ [ψ]s.

This gives the meanings of the propositional connectives, as operations from an old context representing a state of knowledge to a new context representing the state of knowledge that results from processing the propositional information.
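These eliminative updates are easy to simulate. The sketch below is illustrative only: possibilities are modeled as the sets of atoms true in them, and meanings as functions from sets of possibilities to subsets.

```python
# A sketch of propositional update semantics, with possibilities modeled
# as frozensets of the atoms true in them.

from itertools import product

ATOMS = ['P', 'Q', 'R']

# The state of complete ignorance: all eight possibilities.
ignorance = {frozenset(a for a, v in zip(ATOMS, bits) if v)
             for bits in product([False, True], repeat=3)}

def atom(p):
    # [p]: keep the possibilities in which p is true.
    return lambda s: {w for w in s if p in w}

def neg(u):
    # [not phi]: keep the possibilities that the update with phi removes.
    return lambda s: s - u(s)

def disj(u1, u2):
    # [phi or psi]: the union of the two updates.
    return lambda s: u1(s) | u2(s)

s1 = neg(atom('P'))(ignorance)            # announce 'not P'
s2 = disj(atom('Q'), neg(atom('R')))(s1)  # announce 'Q or not R'
```

As in the text, the first announcement leaves four possibilities and the second leaves three.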

As a concrete example, consider the game of Mastermind, played with three positions and the four colors red, green, blue, and yellow (see also Benthem 1996). You are trying to guess a secret code, which is expressed as a sequence of three colours, so the code is one of:

RGB, RGY, RBG, RBY, RYG, RYB, GRB, GRY, GBR, GBY, GYR, GYB, BRG, BRY, BGR, BGY, BYR, BYG, YRG, YRB, YGR, YGB, YBR, YBG

(writing R, G, B, Y for red, green, blue, yellow; the three colours of a code are distinct).
Suppose the secret code is ‘red, blue, yellow’. Since you do not know this, none of the above 24 possibilities is ruled out yet for you. Assume your initial guess is ‘red, green, blue’.

Now the feedback you will get, according to the rules of the game, is in the form of propositional information. Black marks indicate correct colours in correct positions, grey marks indicate correct colours in wrong positions. So the feedback you will get is one black mark (red is in the correct position) and one grey mark (blue is in the wrong position). The propositional formula b1g1 (‘one black and one grey’) expresses this. This rules out all but six possibilities, for you reason as follows: if red is in the correct position, then the code is either red blue yellow (RBY) or red yellow green (RYG); if green is in the correct position, then the code is either BGY or YGR; if blue is in the correct position, then the code is either YRB or GYB. What this means is that you interpret the feedback b1g1 as a map from the set of all possibilities to the set

{RBY, RYG, BGY, YGR, YRB, GYB}.
Suppose your next guess is ‘yellow, green, red’. The feedback now is g2, for red and yellow occur, but in incorrect positions. This rules out every remaining possibility that has a colour in the same position as the guess, and also RYG, which contains all three guessed colours and would therefore have received three grey marks. In other words, it rules out

{RYG, BGY, YGR, YRB}
and you are left with the set of possibilities {RBY, GYB}. Your final guess ‘green yellow blue’ elicits the feedback g2, for blue and yellow occur, but in wrong positions, and your final update yields {RBY}. The fact that you are left with a singleton set indicates that you now know the secret code.
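The successive updates in this Mastermind example can be replayed mechanically. The sketch below assumes, as the example does, that codes consist of three distinct colours and that feedback counts black and grey marks exactly; the function names are illustrative.

```python
# A sketch of Mastermind feedback as propositional update: each guess maps
# the current set of possible codes to the subset consistent with the feedback.

from itertools import permutations

# All 24 codes: three distinct colours from R, G, B, Y.
codes = {''.join(p) for p in permutations('RGBY', 3)}

def feedback(guess, code):
    # (blacks, greys): right colour in the right place, right colour elsewhere.
    black = sum(g == c for g, c in zip(guess, code))
    grey = sum(g in code for g in guess) - black
    return black, grey

secret = 'RBY'

def update(state, guess):
    # Keep exactly the codes consistent with the observed feedback.
    fb = feedback(guess, secret)
    return {c for c in state if feedback(guess, c) == fb}

s1 = update(codes, 'RGB')   # one black, one grey: six codes remain
s2 = update(s1, 'YGR')      # two greys: two codes remain
s3 = update(s2, 'GYB')      # two greys: the secret is identified
```

Each guess thus acts as an eliminative update on the set of possibilities, exactly as in the propositional example above.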

3.2 Programming Statements and their Execution

Programming statements of imperative languages like C are interpreted (or ‘executed’) in the context of a machine state, where machine states can be viewed as allocations of values to registers. Assume the registers are named by variables x, y, z, and that the contents of the registers are natural numbers. Then the following is a machine state:

x: 12
y: 117
z: 3
If the C statement z = x is executed (or ‘interpreted’) in this state, the result is a new machine state:
x: 12
y: 117
z: 12
If the sequence of statements x = y; y = z is executed in this state, the result is:
x: 117
y: 12
z: 12
This illustrates that the result of the sequence z = x; x = y; y = z is that the values of x and y are swapped, with the side effect that the old value of z gets lost. In other words, the meaning of the program z = x; x = y; y = z can be viewed as a map from an input machine state s to an output machine state s′ that differs from s in that s′(x) = s(y), s′(y) = s(x), and s′(z) (the new value of z) equals s(x).
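The relational reading of assignment statements can be sketched as follows, with machine states modeled as dictionaries; the helper names are invented for illustration.

```python
# A sketch of imperative assignment as a map on machine states.

def assign(var, source):
    # Meaning of the statement `var = source`: copy source's value into var.
    return lambda s: {**s, var: s[source]}

def seq(*stmts):
    # Sequential composition: execute the statements in reading order.
    def run(s):
        for stmt in stmts:
            s = stmt(s)
        return s
    return run

swap = seq(assign('z', 'x'), assign('x', 'y'), assign('y', 'z'))

state = {'x': 12, 'y': 117, 'z': 3}
result = swap(state)
```

Running the composed program on the state above swaps x and y, with z ending up holding the old value of x.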

3.3 Quantifiers as Programs

Now consider the existential quantifier ‘there exists x such that A’. Suppose we add this quantifier to a programming language like C. What would be its meaning? It would be an instruction to replace the old value of x by some arbitrary new value, where the new value has property A. We can decompose this into a part ‘there exists x’ and a test ‘A’, where the parts are glued together by sequential composition: ‘∃x;A’. Focusing on the part ‘∃x’, what would be its natural meaning? An instruction to replace the old value of x by some arbitrary new value. This is again a relation between input states and output states, but the difference with definite assignments like x = y is that now the relation is not a function. In fact, this relational meaning of quantifiers shows up in the well known Tarski-style truth definition for first order logic (compare the entry on Tarski's truth definitions):

∃x φ is true in a model M, under a variable assignment α iff (if and only if) there is some β with β different from α at most in the value that gets assigned to x, and φ is true in M under assignment β.

Implicit in this is a relation [x] that holds between α and β iff for all variables y it is the case that y ≠ x implies α(y) = β(y).
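The relation [x] is easy to exhibit concretely. In the sketch below, assignments are dictionaries over an invented three-element domain, and resets lists all β with α[x]β.

```python
# A sketch of the random-reset relation [x]: beta agrees with alpha on
# every variable except possibly x. The domain D is invented for illustration.

D = {0, 1, 2}

def resets(alpha, x):
    # All beta with alpha [x] beta: any value in D may be stored under x.
    return [{**alpha, x: d} for d in sorted(D)]

outs = resets({'x': 0, 'y': 1}, 'x')
```

Since several outputs correspond to one input, the relation is not a function, unlike the definite assignments of the previous subsection.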

4. Dynamic Predicate Logic

The scope conventions of ordinary predicate logic (compare the entry on first-order model theory) are such that the scope of a quantifier is always confined to the formula in which it occurs as its main connective. In other words, in a parse tree of any formula, an occurrence of a quantifier Qx at a node ν will only bind occurrences of x occurring below ν in the tree. Now consider the following discourse:

A man comes in. He sees a dog. He smiles.

How would we paraphrase this discourse in predicate logic? Well, we could translate A man comes in

(1)     ∃x  (man(x) ∧ comes-in(x)).

Then the obvious move would be to translate A man comes in. He sees a dog into:

(2)    ∃x  (man(x) ∧ comes-in(x)) ∧ ∃y  (dog(y) ∧ sees(x, y)).

However, this cannot be correct, since the second occurrence of x is not bound by the occurrence of ∃x in this formula. The right translation is:

(3)    ∃x  (man(x) ∧ comes-in(x) ∧ ∃y  (dog(y) ∧ sees(x, y))).

So far, so good. There is nothing wrong, per se, with (3). It is just that, unlike (2), it is not produced in a compositional way from (1), since rearranging the brackets is not an operation that can be reflected at the semantical level (compare the entry on compositionality). Similarly, if we want to translate our full sample discourse we have to ‘break open’ our previous results again to produce:

(4)    ∃x  (man(x) ∧ comes-in(x) ∧ ∃y  (dog(y) ∧ sees(x, y)) ∧ smiles(x)).

Thus, if we think of translation of discourses as a process, we cannot, in general, produce intermediate translations: we are forced to translate the discourse as a whole. For another example of this, consider Geach's donkey sentence (compare the entry on anaphora):

If a farmer owns a donkey, he beats it.

The obvious translation into predicate logic would be:

(∃x  (farmer(x) ∧ ∃y  (donkey(y) ∧ owns(x, y))) → beats(x, y)).

However, again the last occurrences of x and y will be free in this paraphrase. So the paraphrase will not capture the intended meaning. The correct translation would be something like:

∀x  (farmer(x) → ∀y  ((donkey(y) ∧ owns(x, y)) → beats(x, y))).

This last translation is clearly not obtained in a compositional way from Geach's donkey sentence (see Geach 1962 (Third revised edition: 1980), and the entry on anaphora).

Dynamic predicate logic (DPL) was invented by Jeroen Groenendijk and Martin Stokhof (1991a) to make compositional translation of examples such as the ones above possible. It is the simplest possible variant of predicate logic in which we have existential quantifiers with extendible scope. The universal quantifiers on the other hand are the familiar ones of predicate logic. DPL is a theory of testing and resetting of variables/registers. These are fundamental operations in computer science. Thus, apart from its use in logical semantics, DPL is a simple theory of these basic operations.

To understand the basic idea behind DPL, look at the example A man comes in. He sees a dog again. Why is it the case that in ordinary predicate logic we cannot take the meaning associated with A man comes in and combine it with the meaning of He sees a dog to obtain the meaning of A man comes in. He sees a dog? Well, the meaning of A man comes in is a set of assignments. Suppose, e.g., there is a man in the domain of discourse entering some specified place. Then, a man comes in would be true. Its meaning would be the set of all assignments on some given set of variables. There is no way to get at the object or objects introduced by the sentence, just by looking at this set. It could also be the meaning of A dog sees a cat. What we need is an alternative meaning that ‘remembers’ and ‘produces’ the man introduced in the discourse.

We get our clue on how to do this from staring at the definition of existential quantification in ordinary predicate logic. Suppose we work with total assignments on a fixed set of variables VAR over a fixed domain D. Let the meaning of P(x) be the set of assignments F. Thus, F is the set of all assignments α with the property that αx is an object satisfying P. We define the resetting relation [x] by:


α[x]β :⇔ ∀v ∈ VAR \ {x}  αv = βv.

So [x] is the relation ‘β is a result of resetting α on x’. Now the meaning, say G, of ∃x  P(x), will be:

G := {α ∈ ASS | ∃β ∈ F  α[x]β}.

Thus, G is the set of assignments that can be successfully reset w.r.t. x to an assignment in F. Viewed differently G is the domain of the relation R given by

αRβ :⇔ α[x]β and β ∈ F.

We could say that G is the precondition for the resetting action R. Now the idea of DPL is to take as the meaning of ∃x P(x) not the precondition G but the resetting action R. In this way we do not lose information since G can always be obtained from R. Moreover, the range elements β of R are constrained to be in F and have x-values in the interpretation of P. These are precisely the values of x that satisfy P, the ones we were looking for.

More generally, we take as DPL-meanings binary relations between assignments. Such relations can be seen as (modelings of) resetting actions. This is an instance of a well-known, admittedly simplistic, way of modeling actions: an action is viewed as a relation between the states of the world before the action and the corresponding states after the action.

Here is the full definition. Let a non-empty domain D and a set of variables VAR be given. Let a model M=⟨D, I⟩ of signature Σ be given. Atomic conditions π are of the form P(x0, …, xn−1), where P is in Σ of arity n. Atomic resets ε are of the form ∃v, where v is a variable. The language of predicate logic for Σ is given by:

φ ::= ⊥ | ⊤ | π | ε | φ · φ | ~(φ).

Assignments are elements α, β, …, of ASS := DVAR. We define the relational meaning of the language, as follows:

[⊥] := ∅;
[⊤] := {⟨α, α⟩ | α ∈ ASS};
[P(x0, …, xn−1)] := {⟨α, α⟩ | ⟨α(x0), …, α(xn−1)⟩ ∈ I(P)};
[∃v] := {⟨α, β⟩ | α[v]β};
[φ · ψ] := {⟨α, β⟩ | ∃γ  (α[φ]γ and γ[ψ]β)};
[~φ] := {⟨α, α⟩ | there is no β with α[φ]β}.

Truth is defined in terms of relational meaning:

α ⊨ φ :⇔ ∃β  α[φ]β.

We can define implication φ → ψ as ~(φ · ~ψ). Applying the truth definition to this gives:

α ⊨ φ → ψ :⇔ ∀β  (α[φ]β ⇒ β ⊨ ψ).

Relational meaning also yields the following beautiful definition of dynamic implication:

φ ⊨ ψ :⇔ ∀α, β  (α[φ]β ⇒ ∃γ  β[ψ]γ).

This definition was first introduced by Hans Kamp in his pioneering paper Kamp 1981. Note that ~φ is equivalent to (φ → ⊥), and that (φ → ψ) is true iff φ ⊨ ψ. We can define ∀x  (φ) as (∃x → φ).

A possible alternative notation for ∃v would be [v := ?] (random reset). This emphasizes the connection with random assignment in programming.
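The relational semantics just defined can be prototyped directly. The following sketch builds the meaning of the discourse A man comes in. He sees a dog as a relation between assignments over a toy model; the model, its domain, and the predicate names are invented for illustration, and the condition comes-in is dropped to keep the model small.

```python
# A sketch of DPL meanings as sets of (alpha, beta) pairs, with assignments
# frozen as tuples for hashability. Model and predicates are invented.

from itertools import product

D = ['m1', 'd1']
VAR = ('x', 'y')
I = {'man': {('m1',)}, 'dog': {('d1',)}, 'sees': {('m1', 'd1')}}

ASS = [dict(zip(VAR, vals)) for vals in product(D, repeat=len(VAR))]

def frz(a):
    return tuple(sorted(a.items()))

def cond(pred, *vs):
    # Atomic condition: a subset of the diagonal (a test).
    return {(frz(a), frz(a)) for a in ASS
            if tuple(a[v] for v in vs) in I[pred]}

def exists(v):
    # Atomic reset ∃v: relate alpha to every v-variant beta.
    return {(frz(a), frz({**a, v: d})) for a in ASS for d in D}

def comp(r1, r2):
    # Sequential composition (the dot).
    return {(a, c) for (a, b) in r1 for (b2, c) in r2 if b == b2}

def neg(r):
    # ~phi: the test that phi has no successful output.
    doms = {a for (a, _) in r}
    return {(frz(a), frz(a)) for a in ASS if frz(a) not in doms}

# 'A man comes in. He sees a dog.' as ∃x · man(x) · ∃y · dog(y) · sees(x, y).
discourse = comp(exists('x'), comp(cond('man', 'x'),
                 comp(exists('y'), comp(cond('dog', 'y'),
                      cond('sees', 'x', 'y')))))

# Truth: alpha satisfies the discourse iff it has at least one output.
true_everywhere = all(any(a == frz(al) for (a, _) in discourse) for al in ASS)
```

The outputs of the discourse relation all store the man under x and the dog under y: unlike a mere set of satisfying assignments, the relation ‘remembers’ and ‘produces’ the individuals introduced.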

The interpretations of predicate symbols are conditions. They are subsets of the diagonal {⟨α, α⟩ | α ∈ ASS}. A condition is a test: it passes on what is OK and throws away what is not OK, but it modifies nothing. The mapping diag that sends a set F of assignments to a condition {⟨α, α⟩ | α ∈ F} is the link between the classical and the dynamic world. E.g. the composition of the diagonals of F and G is the diagonal of their intersection.

Ordinary Predicate Logic can be interpreted in DPL as follows. We suppose that the predicate logical language has as connectives and quantifiers: ⊤, ⊥, ∧, →, ∃x. We translate as follows:

⊤† := ⊤;
⊥† := ⊥;
(P(x0, …, xn−1))† := P(x0, …, xn−1);
(φ ∧ ψ)† := φ† · ψ†;
(φ → ψ)† := φ† → ψ†;
(∃x φ)† := ~~(∃x · φ†).

We get that the meaning of the translation of φ is the diagonal of the classical interpretation of φ. Our translation is compositional. It shows that we may consider Predicate Logic as a fragment of DPL.

It is, conversely, possible to translate any DPL-formula φ to a predicate logical formula φ°, such that the domain of [φ] is the classical interpretation of φ°. One of the ways to define this translation is by means of a precondition calculus, with Floyd-Hoare rules (Eijck and de Vries 1992). The following is a variation on this. Take the language of standard predicate logic, with diamond modalities ⟨ψ⟩φ added, where ψ ranges over DPL formulas, with meaning α ⊨ ⟨ψ⟩φ if there is an assignment β with α[ψ]β, and β ⊨ φ. Then the following equivalences show that this extension does not increase expressive power.

⟨⊤⟩φ ⇔ φ;
⟨⊥⟩φ ⇔ ⊥;
⟨π⟩φ ⇔ (π ∧ φ);
⟨∃x⟩φ ⇔ ∃x φ;
⟨ψ · χ⟩φ ⇔ ⟨ψ⟩⟨χ⟩φ;
⟨~ψ⟩φ ⇔ (¬⟨ψ⟩⊤ ∧ φ).

So in a weak sense ‘nothing new happens in DPL’. We cannot define a set that we cannot also define in Predicate Logic.

The equivalences for the modalities fix a translation ( · )° that yields the weakest precondition for achieving a given postcondition. As an example, we compute ⟨ψ⟩⊤, where ψ is the Geach sentence (the weakest precondition for success of the Geach sentence):

(⟨(∃x · Fx · ∃y · Dy · Hxy) → Bxy⟩⊤)°
⇔ (⟨~((∃x · Fx · ∃y · Dy · Hxy) · ~Bxy) ⟩⊤)°
⇔ ¬(⟨(∃x · Fx · ∃y · Dy · Hxy) · ~Bxy ⟩⊤)°
⇔ …
⇔ ¬∃x (Fx ∧ ∃y (Dy ∧ Hxy ∧ ¬Bxy)).
The translation gives the Geach sentence its correct meaning, but it is not compositional: the example illustrates that the way in which the existential quantifier gets handled depends on whether it is in the scope of ~.

5. Update Semantics, the Very Idea

Update semantics is a theory of meaning based on a very simple idea. We start with a simple model of a hearer / receiver who receives items of information sequentially. At every moment the hearer is in a certain state: she possesses certain information. This state is modified by the incoming information in a systematic way. We now analyze the meaning of the incoming items as their contribution to the change of the information state of the receiver. Thus, the meaning is seen as an action.

Note that the meanings are in fact action types. They are not the concrete changes of some given state into another, but what such concrete changes have in common.

The most abstract mathematical format for update semantics is as a transition system. We have a set of states and a set of labeled transitions between these states. The meaning of a given item is modeled by the relation corresponding to a label that is assigned to the item. Here are two examples of labeled transition systems. The system A →a B →b C is a functional transition system. The system D ←a A →a B →b C is non-functional, since the state A has two outgoing a-transitions. Note the abstractness of the notion of state. Nothing has been said about what states are, and this lack of commitment is intentional. Information states are often called contexts, since the state is a precondition for the ‘interpretation’ of the informational item. Also the use of the word ‘context’ makes it clear that we are not interested in the total state of the receiver but only in aspects of it relevant to the kind of information we are focussing on. Thus, meanings are often called context change potential in the dynamic tradition.

6. Updates on a Boolean Algebra

A very simple and appealing model is to consider updates on a Boolean algebra (compare the entry on the mathematics of Boolean algebra), or, more concretely, on the power set of a given set of possibilities. Thus, we have certain operations on our states available like conjunction / intersection and negation / complement.

6.1 Semantics for Maybe

One realization of this idea is Frank Veltman's Update Semantics for maybe (Veltman 1991, Veltman 1996). Note that the discourse Maybe it is raining. It is not raining is coherent. However, the discourse It is not raining. Maybe it is raining is not. (We are assuming that the environment about which we receive information does not change during the discourse.) The aim of Veltman's Update Semantics is to analyze this phenomenon of non-commutativity.

The language of update semantics is that of propositional logic, with the possibility operator ‘◊’ added. This operator stands for ‘maybe’. Here is the specification of the language, where ‘p’ ranges over the propositional variables. We prefer a dot over the usual conjunction sign to stress that conjunction means sequential composition in the order of reading.

φ ::= p | ¬(φ) | (φ · φ) | (φ ∨ φ) | ◊(φ).

The interpretation is a simple extension of the ‘update perspective’ on propositional logic sketched above, where the update interpretations of propositional atoms, of negation, of conjunction and of disjunction were given. The update interpretation of ‘maybe’ is given by:

Maybe statements are of the form ◊φ. The corresponding update action on the current context is to check whether an update with φ in the context yields a non-empty set of possibilities. In the affirmative case, the update with ◊φ returns the whole context, otherwise it returns the empty set.
For spelling this out more formally, we fix a Boolean algebra B. Interpretations are functions from B to B. Let α be an assignment from the propositional variables to the domain of B. We define, for s in B:

[p]αs := s ∧ α(p);
[¬φ]αs := s ∧ ¬[φ]αs;
[φ · ψ]αs := [ψ]α([φ]αs);
[φ ∨ ψ]αs := [φ]αs ∨ [ψ]αs;
[◊φ]αs := s if [φ]αs ≠ ⊥, and [◊φ]αs := ⊥ otherwise.

Definition of truth in an information state:

s ⊨α φ :⇔ s ≤ [φ]αs.

Instead of s ⊨α φ we also say that φ is accepted in s.

Definition of consequence, relative to an assignment:

ψ ⊨α φ :⇔ ∀s  [ψ]αs ⊨α φ.

Consistency, relative to an assignment:

φ is consistent iff, for some state s, we have [φ]αs ≠ ⊥.

Note that φ is consistent iff φ ⊭α ⊥.

We easily see that s ⊨α φ · ψ iff s ⊨α φ and s ⊨α ψ. So the difference between Maybe it is raining. It is not raining and It is not raining. Maybe it is raining cannot be understood at the level of acceptance: neither of the two discourses is ever accepted. However, Maybe it is raining. It is not raining is clearly consistent, whereas It is not raining. Maybe it is raining is inconsistent.
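The contrast between the two discourses can be checked concretely. The sketch below works on the powerset of a two-world set; the representation of worlds as frozensets of true atoms is an illustrative choice.

```python
# A sketch of Veltman-style updates: atoms eliminate worlds, 'maybe' is a
# test on the whole state, and the dot is composition in reading order.

def atom(p):
    # [p]: keep the worlds where p holds.
    return lambda s: {w for w in s if p in w}

def neg(u):
    # [not phi]: remove the worlds that the update with phi would keep.
    return lambda s: s - u(s)

def maybe(u):
    # [maybe phi]: return the state unchanged if phi is possible, else fail.
    return lambda s: s if u(s) else set()

def seq(u1, u2):
    # The dot: sequential composition in the order of reading.
    return lambda s: u2(u1(s))

RAIN, DRY = frozenset({'rain'}), frozenset()
ignorance = {RAIN, DRY}

coherent = seq(maybe(atom('rain')), neg(atom('rain')))(ignorance)
incoherent = seq(neg(atom('rain')), maybe(atom('rain')))(ignorance)
```

Processing Maybe it is raining first leaves a consistent (non-empty) state; in the reverse order, the maybe-test fails and the inconsistent state results.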

If we drop the semantics for maybe, Veltman's semantics collapses modulo isomorphism into classical semantics, the relevant mappings being F ↦ F⊤ and p ↦ λs · (s ∧ p). These mappings spell out the relation between the usual semantics for propositional logic and its update semantics.

6.2 Properties of Update Functions

Here are some important properties of update functions. The first and second ones hold of updates that commute with (possibly infinite) disjunctions. The third one holds of updates that narrow down the range of possibilities.

  • F is distributive iff F commutes with arbitrary (possibly infinite) disjunctions: F(⋁X) = ⋁{Fs | s ∈ X}. (This presupposes that the algebra is complete.)
  • F is finitely distributive iff F⊥ = ⊥ and F(s ∨ s′) = Fs ∨ Fs′.
  • F is eliminative iff Fs ≤ s, for all s.

Note that if a Boolean algebra is finite, then it is automatically complete. Moreover, in this case distributive and finitely distributive coincide. Here is an example of an update function that is finitely distributive, but not distributive. Consider the Boolean algebra of subsets of the natural numbers. Take F(X) := ⊤ if X is infinite and F(X) := ⊥ if X is finite. Then, clearly, F is finitely distributive, but not distributive.

The update functions of Veltman that can be generated in his system for maybe are eliminative, but not distributive. E.g., suppose ⊥ < s < ⊤ and α(p) = s. Then, [◊p]α(s ∨ ¬s) = ⊤ and ([◊p]α(s) ∨ [◊p]α(¬s)) = (s ∨ ⊥) = s.

We will see that the update functions associated with DPL are distributive, but not eliminative, due to the presence of ∃v. If we view eliminativity as an explication of information growth, non-eliminativity means that DPL contains destructive updates. This is intuitively plausible, since ∃v does indeed throw away previous values stored under v. Full distributivity means that the update functions can be considered as relations.

6.3 Dynamic Predicate Logic in Update Form

In Update Semantics for DPL, we represent information states as sets of assignments and we represent transitions as functions from sets of assignments to sets of assignments.

Distributive update functions and relations over a given domain can be correlated to each other in a quite general way. Let A be a non-empty set. Let R range over binary relations on A and let F range over functions from ℘A to ℘A. We define:

FR(X) := {β ∈ A | αRβ for some α ∈ X};
αRFβ :⇔ β ∈ F({α}).

We can show that, if F is distributive, then FRF = F and RFR = R. We can transform the relations of DPL to update functions via the mapping F( · ).
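The correspondence between relations and distributive update functions can be sketched concretely; the set A and the relation R below are arbitrary illustrations.

```python
# A sketch of the relation/update-function correspondence on a small set A.

A = {0, 1, 2}

def F_of(R):
    # The update function induced by a relation: X maps to the image R[X].
    return lambda X: {b for (a, b) in R if a in X}

def R_of(F):
    # The relation induced by an update function: a R b iff b in F({a}).
    return {(a, b) for a in A for b in F({a})}

R = {(0, 1), (0, 2), (1, 1)}
roundtrip = R_of(F_of(R))
```

The round trip recovers R exactly; for a distributive F, the analogous round trip through R_of recovers F, since a distributive function is determined by its values on singletons.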

Here is the direct definition. Let a non-empty domain D and a set of variables VAR be given. Let a model M = ⟨D, I⟩ of signature σ be given. Atomic conditions π are of the form P(x0, … ,xn−1), where P is in σ of arity n. Atomic resets ε are of the form ∃v, where v is a variable. We repeat the definition of the language of dynamic predicate logic for σ:

A state is a set of assignments, i.e. of functions VAR → D. We consider the states as a complete Boolean algebra B with the usual operations.

Formulas φ of predicate logic are interpreted as update functions [φ], i.e. as functions States → States. We define:

[⊥]s := ∅;
[⊤]s := s;
[P(x0, …, xn−1)]s := {α ∈ s | ⟨α(x0), …, α(xn−1)⟩ ∈ I(P)};
[∃v]s := {β | α[v]β for some α ∈ s};
[φ · ψ]s := [ψ]([φ]s);
[~φ]s := {α ∈ s | [φ]{α} = ∅}.

The truth definition now takes the following shape:

s ⊨ φ :⇔ ∀α ∈ s  [φ]{α} ≠ ∅.

And here is the definition of dynamic implication in its new guise:

φ ⊨ ψ :⇔ ∀s  [φ]s ⊨ ψ.
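The update form of the reset ∃v can be sketched as follows; the toy domain is invented. The example also illustrates the point made earlier, that ∃v is destructive (non-eliminative): it throws away the old value stored under v.

```python
# A sketch of the DPL reset ∃v in update form, over a toy one-variable model.

D = [0, 1]

def frz(a):
    return tuple(sorted(a.items()))

def exists(v):
    # [∃v]s: reset v in every assignment in s to every value in D.
    return lambda s: {frz({**dict(a), v: d}) for a in s for d in D}

state = {frz({'v': 0})}
out = exists('v')(state)      # both values of v are now possible

# The output is not a subset of the input: the update is not eliminative.
eliminative = out <= state
```

By contrast, the reset is distributive: applying it to a union of states gives the union of the results, which is why it can equally well be viewed as a relation.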

6.4 Van Benthem's Bottle

Johan van Benthem enclosed the dynamic fly in a static bottle by showing that update semantics on a Boolean algebra collapses into classical semantics if we demand both finite distributivity and eliminativity (Benthem 1989).

This argument seems to show that the non-eliminativity of a relational semantics like DPL is a necessary feature. The cost of distributivity is that we accept destructive updates. After giving the argument we will indicate the way out of the bottle in Subsection 6.5. Here is the argument.

Theorem 1 [van Benthem] Suppose we are given a Boolean algebra B and an update function f over B. Suppose further that f is finitely distributive and eliminative. I.e.,
  • f⊥ = ⊥,  f(s ∨ s′) = fs ∨ fs′,
  • fs ≤ s.

Then, we have fs = (s ∧ f⊤).


s ∧ f⊤ = s ∧ f(s ∨ ¬s)
= s ∧ (fs ∨ f(¬s))
= (s ∧ fs) ∨ (s ∧ f(¬s))
= fs ∨ ⊥ = fs.

The last step uses eliminativity twice: s ∧ fs = fs, and s ∧ f(¬s) ≤ s ∧ ¬s = ⊥.

The map τ : f ↦ f⊤ is a bijection between finitely distributive and eliminative update functions and the elements of our Boolean algebra. Moreover, for finitely distributive and eliminative f and g,

(g∘f)s = g(fs) = (fs ∧ g⊤) = ((s ∧ f⊤) ∧ g⊤) = (s ∧ (f⊤ ∧ g⊤)).

So τ(g∘f) = τ(f) ∧ τ(g). One can also show that τ commutes with negation. Thus, τ is an isomorphism between the finitely distributive and eliminative updates with their intrinsic operations and B.
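The collapse can be checked by brute force on a small powerset algebra. The sketch below (our own encoding, not from the text) enumerates all maps on ℘W for a two-element W, keeps the eliminative and finitely distributive ones, and confirms both the identity fs = s ∧ f⊤ and the bijectivity of τ : f ↦ f⊤:

```python
from itertools import combinations, product

W = (0, 1)
states = [frozenset(c) for r in range(3) for c in combinations(W, r)]
top = frozenset(W)

def all_updates():
    """All maps f : ℘W → ℘W, represented as dicts keyed by state."""
    for values in product(states, repeat=len(states)):
        yield dict(zip(states, values))

def eliminative(f):
    return all(f[s] <= s for s in states)

def distributive(f):
    return (f[frozenset()] == frozenset() and
            all(f[s | t] == f[s] | f[t] for s in states for t in states))

good = [f for f in all_updates() if eliminative(f) and distributive(f)]

# van Benthem: every such f is intersection with the fixed proposition f⊤
assert all(f[s] == s & f[top] for f in good for s in states)
# ... and τ : f ↦ f⊤ is a bijection onto ℘W
assert len(good) == len(states)
assert len({f[top] for f in good}) == len(good)
```

Out of the 256 maps on ℘W, exactly four survive the two conditions, one for each possible value of f⊤, which is the bijection the theorem describes.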

6.5 Dimensions of Information

We can escape the bottle by treating information as ‘more dimensional’. Consider, e.g., the operation ∃x in DPL. This means intuitively:

Extend the information state with a discourse referent x.

This does not change the content of what the receiver knows in the worldly sense, but it changes the algebra of possible propositions. Thus, it is reasonable to say that this operation takes us to a different algebra.

This suggests the following setting. An update function is given by (i) a canonical embedding E between Boolean algebras B0 and B1 and (ii) a mapping f from B0 to B1. Here E tells us which proposition in the new algebra contains the same worldly information as a proposition in the old one. The salient (possible) properties of f now translate to:

  • distributivity: f⊥ = ⊥,  f(s ∨ s′) = fs ∨ fs′;
  • eliminativity: fs ≤ Es.

In the new context, van Benthem's theorem tells us that, if f is eliminative and finitely distributive, then fs = (Es ∧ f⊤0), where ⊤0 is the top of B0. Thus, the modified result shows that an eliminative and finitely distributive update function can be completely characterized by the pair ⟨E, f⊤0⟩. This pair can be more or less considered as the semantic discourse representation structure, or semantic drs, associated with f. So, in a sense, van Benthem's result explains the possibility of Discourse Representation Theory.

Frank Veltman, Jeroen Groenendijk, and Martin Stokhof succeeded in integrating update semantics for maybe and Dynamic Predicate Logic by realizing a two-dimensional semantics, where the elements of the relevant Boolean algebras are sets of assignment/world pairs. The change in the algebras is caused by the extension of the domains of the assignments: introducing a discourse referent enlarges the storage capacity (Groenendijk et al. 1996).

6.6 Introducing a Referent

What happens if we try to introduce a discourse referent, when it is already present? This phenomenon is in fact the source of destructiveness of classical DPL. The imagined situation is deeply unnatural. How can one intend to introduce a new referent and fail at the job? From the technical point of view there are many ways to handle the problem. A first way is to simply forbid the occurrence of such a repeated introduction. This amounts to introducing a constraint on syntax. This way is embodied in versions of Discourse Representation Theory: the drs-construction algorithm always invites us to choose a fresh variable letter when a new discourse referent is needed.

A second way is to store a stack of objects under every variable letter. In this way one obtains versions of Vermeulen's Sequence Semantics (Vermeulen 1993a). One could view Vermeulen's idea either as ‘different objects under one referent’ or as ‘different referents under one variable name’. See below.

The most satisfactory way is to say that the imagined occurrence of double introduction is an instance of the fallacy of misplaced concreteness. It rests on confusing the discourse referent with its label, or, to change the simile, it confuses the file and a pointer to the file. Only the label could already be present. The referent is new ex stipulatione.

Again there are various ways of handling a situation where labels/pointers are overloaded. For example, we could say that in case of a clash, the new referent wins and the old referent loses its label. This gives us Vermeulen's referent systems (Vermeulen 1995). Alternatively we could let the old referent win. This possibility is embodied in Zeevat's compositional version of discourse representation theory (Zeevat 1991). Finally, we could allow referents to share a label. Vermeulen's sequence semantics can be viewed as one way of implementing this idea.
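A minimal sketch of the stack-valued idea (names of our own choosing): each variable labels a stack of referents, and re-introduction pushes rather than destroys, so the new referent wins the label while the old one survives underneath:

```python
# Stack-valued states: each variable name labels a stack of referents,
# the top of the stack being the current value (Vermeulen-style).
def push(state, var, value):
    """Introduce a referent under `var` without destroying the old one."""
    new = dict(state)
    new[var] = new.get(var, ()) + (value,)
    return new

def current(state, var):
    return state[var][-1]

s0 = {}
s1 = push(s0, 'x', 'farmer')
s2 = push(s1, 'x', 'donkey')    # re-introduction: old referent survives below

assert current(s2, 'x') == 'donkey'   # the new referent wins the label...
assert s2['x'][0] == 'farmer'         # ...but the old one is still accessible
```

A destructive ∃v, by contrast, would overwrite `state[var]` with a one-element stack, losing `'farmer'` for good.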

Frank Veltman, Jeroen Groenendijk, and Martin Stokhof in their Groenendijk et al. 1996 use one version of referent systems in their integration of Update Semantics for maybe and Dynamic Predicate Logic.

7. Dynamic Semantics and Presupposition

We have looked at anaphora and at epistemic modalities. Other natural language phenomena with a dynamic flavour are presuppositions, and assumption-introducing expressions like ‘suppose’. This section sketches a dynamic perspective on presuppositions.

7.1 Different Styles

Dynamic logic comes in various flavours, the main distinction being that between single-storey and dual-storey architectures. In the single-storey approach everything is dynamic, and therefore formulas denote relations. In the dual-storey approach there is a level of state changes and a level of states, and it is possible to move back and forth between the two levels. An example of a switch from the state change level to the state level is the postcondition operator, which gives the postcondition of a state change for some initial condition. Another operator is the precondition operator, which gives the (weakest) precondition of a state change for a given postcondition. Below we give an example of a dual-storey approach, with precondition operators. It is not hard to work out a single-storey version or a version with postcondition operators.

Pragmatic accounts of presupposition and presupposition projection were given by Karttunen (Karttunen 1973; Karttunen 1974) and Stalnaker (Stalnaker 1972; Stalnaker 1974). These authors proposed an explanation for the fact that the presupposition of a conjunction φ and ψ consists of the presupposition of φ conjoined with the implication assφ → presψ. When a speaker utters this conjunction, she may take it for granted that the audience knows φ after she has uttered this first conjunct. So even if φ is not presupposed initially, it will be presupposed by the time she gets to assert ψ, for now the context has shifted to encompass φ.

Various authors have tried to make the idea of shifting context precise, most notably Heim (Heim 1983b). Presupposition projection has been a major topic in dynamic semantics of natural language ever since Beaver 2001. Formal accounts of presupposition within the DRT framework (e.g., Sandt 1992, or the textbook treatment based on this in Blackburn and Bos forthcoming) combine the dynamics for setting up appropriate contexts for anaphoric linking with the dynamics for presupposition handling and presupposition accommodation. Although anaphora resolution and presupposition handling have things in common, we will treat them as orthogonal issues. For a dissenting voice on the marriage of dynamic semantics and presupposition handling, see Schlenker 2007.

7.2 Presupposition Failures and Error Transitions

Presupposition failures are errors that occur during the left to right processing of a natural language text. On the assumption that sequential processing changes context dynamically, a dynamic account of presupposition failure models presupposition failure as transition to an error state.

So we postulate the existence of an error state, and we say that a process ‘aborts’ (or: ‘suffers from presupposition failure’) in a given context if it leads to the error state from that context.

Propositional Dynamic Error Logic is a logic of formulas and programs that is interpreted in labeled transition systems over a state set that includes the error state error.

Let P be a set of proposition letters.

φ ::= ⊤ | p | ¬φ | φ1 ∧ φ2 | E(π) | ⟨π⟩φ | [π]φ

π ::= abort | φ? | π1 ; π2 | π1 ∪ π2


⊥ :≡ ¬⊤
φ1 ∨ φ2 :≡ ¬(¬φ1 ∧ ¬φ2)
φ1 → φ2 :≡ ¬(φ1 ∧ ¬φ2)

Let M be a pair (S, V) with error ∈ S (the set of states includes the error state) and V : P → ℘(S − {error}) (the valuation assigns a set of proper states to every proposition letter in P). Formulas and programs are interpreted by mutual recursion.

All relational meanings will be subsets of S × S, with two additional properties:

  1. (error, error) is an element of every relational meaning,
  2. (error, s) ∈ R implies s = error.

The composition R ; T of two relations R and T is defined in the usual way:

R ; T := { (s, s′) | ∃s″ ∈ S : (s, s″) ∈ R and (s″, s′) ∈ T}.

Note that it follows from the definition and the properties of R and T that R ; T will also have these properties. In particular, (error, s) ∈ R; T implies s = error. In other words, there is no recovery from error.

In the truth definition we assume that s is a proper state.

M, s ⊨ ⊤ always
M, s ⊨ p  :≡  s ∈ V(p)
M, s ⊨ ¬φ  :≡  not M, s ⊨ φ
M, s ⊨ φ1 ∧ φ2  :≡  M, s ⊨ φ1 and M, s ⊨ φ2
M, s ⊨ E(π)  :≡  (s, error) ∈ [[π]]M
M, s ⊨ ⟨π⟩φ  :≡  there is an s′ ∈ S − {error}
with (s, s′) ∈ [[π]]M and M, s′ ⊨ φ
[[abort]]M  :≡  {(s, error) | s ∈ S}
[[φ?]]M  :≡  {(s, s) | s ∈ S − {error} and M, s ⊨ φ} ∪ {(error, error)}
[[π1 ; π2]]M  :≡  [[π1]]M ; [[π2]]M
[[π1 ∪ π2]]M  :≡  [[π1]]M ∪ [[π2]]M.
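The mutual recursion between formulas and programs can be sketched as a small interpreter. The encoding below is our own (formulas and programs as tagged tuples, over a one-letter model); it checks that a recipe-style program (¬p? ; abort) ∪ p? reaches the error state exactly where p fails:

```python
ERROR = 'error'

# A tiny model: proper states are valuations for p, plus the error state.
S = [frozenset(), frozenset({'p'}), ERROR]

def with_error(R):
    """Enforce property 1 of relational meanings: (error, error) ∈ R."""
    return R | {(ERROR, ERROR)}

def prog(pi):
    kind = pi[0]
    if kind == 'abort':
        return with_error({(s, ERROR) for s in S})
    if kind == 'test':
        phi = pi[1]
        return with_error({(s, s) for s in S if s != ERROR and form(phi, s)})
    if kind == 'seq':
        R, T = prog(pi[1]), prog(pi[2])
        return {(s, s2) for (s, s1) in R for (t, s2) in T if t == s1}
    if kind == 'cup':
        return prog(pi[1]) | prog(pi[2])
    raise ValueError(pi)

def form(phi, s):
    kind = phi[0]
    if kind == 'top':
        return True
    if kind == 'atom':
        return phi[1] in s
    if kind == 'not':
        return not form(phi[1], s)
    if kind == 'and':
        return form(phi[1], s) and form(phi[2], s)
    if kind == 'E':
        return (s, ERROR) in prog(phi[1])
    if kind == 'dia':
        return any(t != ERROR and form(phi[2], t)
                   for (u, t) in prog(phi[1]) if u == s)
    raise ValueError(phi)

p = ('atom', 'p')
# (¬p? ; abort) ∪ p?  — presupposition p, assertion p
pi = ('cup', ('seq', ('test', ('not', p)), ('abort',)), ('test', p))
s_p, s_0 = frozenset({'p'}), frozenset()
assert form(('dia', pi, ('top',)), s_p) and not form(('E', pi), s_p)
assert form(('E', pi), s_0)       # presupposition fails: error is reachable
```

Property 2 (no recovery from error) holds automatically here, since the only pair leaving error in any basic program is (error, error), and sequential composition preserves this.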

This language has an obvious axiomatisation: the axioms of propositional logic, an axiom stating the relation between program diamonds and boxes,

⟨π⟩¬φ ↔ ¬[π]φ

the distribution axiom for programs

[π](φ1 → φ2) → ([π]φ1 → [π]φ2)

the reduction axioms for sequential composition, choice and test:

⟨π1 ; π2⟩φ ↔ ⟨π1⟩⟨π2⟩φ
⟨π1∪π2⟩φ ↔ ⟨π1⟩φ ∨ ⟨π2⟩φ
⟨φ1?⟩φ2 ↔ φ1 ∧ φ2

reduction axioms for error behaviour of composition, choice and test:

E(π1 ; π2) ↔ E(π1) ∨ ⟨π1⟩E(π2)
E(π1 ∪ π2) ↔ E(π1) ∨ E(π2)
E(φ?) ↔ ⊥

the axiom stating that abort leads to error and to no other state,

E(abort) ∧ [abort]⊥

and the rules of inference modus ponens (from φ1 and φ1 → φ2 infer φ2) and program necessitation (from φ infer [π]φ).

Let us see now how this applies to presupposition projection. Given a pair of formulas consisting of a presupposition pres and an assertion ass, the general recipe for forging a program from this pair is by means of

(¬pres? ; abort) ∪ ass?

This uses the toolset of dynamic logic (compare the entry on propositional dynamic logic) to enforce the desired behaviour: if the presupposition is not fulfilled the program aborts and otherwise the program behaves as a test for the assertion.

Apply this to the case of being a bachelor. The presupposition is being male and being adult (say m ∧ a), and the assertion is being unmarried, for which we use n. According to the recipe above the program bachelor is defined as

(¬(m ∧ a)? ; abort) ∪ n?.

Similarly, being male has presupposition ⊤ and assertion m, so the program male is defined as (⊥? ; abort) ∪ m?, which reduces to m?. What this says is that male is a program without presupposition (the program never aborts), whereas bachelor does have a presupposition (the program aborts if the test m ∧ a fails).
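A quick sanity check, in an encoding of our own: treating worlds as subsets of {m, a, n} and running the bachelor program directly, the states from which no error transition exists are exactly the m ∧ a worlds, and the states with a successful transition are exactly the n worlds:

```python
# Worlds as subsets of {m, a, n}; bachelor = (¬(m ∧ a)? ; abort) ∪ n?
from itertools import combinations

atoms = ('m', 'a', 'n')
worlds = [frozenset(c) for r in range(4) for c in combinations(atoms, r)]

def bachelor(w):
    """Successor states of w, with 'error' standing for abortion."""
    out = set()
    if not ('m' in w and 'a' in w):   # presupposition m ∧ a fails: abort
        out.add('error')
    if 'n' in w:                      # assertion test n succeeds
        out.add(w)
    return out

pres = {w for w in worlds if 'error' not in bachelor(w)}              # ¬E(π)
ass = {w for w in worlds if any(t != 'error' for t in bachelor(w))}   # ⟨π⟩⊤

assert pres == {w for w in worlds if 'm' in w and 'a' in w}
assert ass == {w for w in worlds if 'n' in w}
```

This anticipates the definitions assπ := ⟨π⟩⊤ and presπ := ¬E(π) used below.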

To get the assertion back from a program π, we can use

⟨π⟩⊤

which gives the conditions under which the program has at least one transition that does not lead to error. Here is a proof, for a program π of the form (¬pres? ; abort) ∪ ass?:

⟨(¬pres? ; abort) ∪ ass? ⟩⊤  ≡  ⟨¬pres? ; abort⟩⊤ ∨ ⟨ass?⟩⊤
 ≡  ⟨¬pres?⟩⟨abort⟩⊤ ∨ ass
 ≡  ⟨¬pres?⟩⊥ ∨ ass
 ≡  (¬pres ∧ ⊥) ∨ ass
 ≡ ass.

To get the presupposition, we can use

¬E(π)

which gives the conditions under which the program will not have a transition to error. Here is a proof:

E((¬pres? ; abort) ∪ ass?)  ≡  E(¬pres? ; abort) ∨ E(ass?)
 ≡  (E(¬pres?) ∨ ⟨¬pres?⟩E(abort)) ∨ ⊥
 ≡  ⊥ ∨ ⟨¬pres?⟩E(abort) ∨ ⊥
 ≡  ⟨¬pres?⟩E(abort)
 ≡  ¬pres ∧ ⊤
 ≡  ¬pres.

It follows that ¬E((¬pres? ; abort) ∪ ass?) ≡ pres.

Now consider the composition male; bachelor, the result of first uttering male, and next bachelor. The assertion of this composed utterance is:

⟨male ; bachelor⟩⊤  ≡  ⟨male⟩⟨bachelor⟩⊤
 ≡  ⟨male⟩n
 ≡  m ∧ n.

Its presupposition is

¬E(male ; bachelor)  ≡  ¬E(male) ∧ ¬⟨male⟩E(bachelor)
 ≡  ⊤ ∧ [male]¬E(bachelor)
 ≡  m → (m ∧ a)
 ≡  m → a.
program π1 ; π2
assertion assπ1 ∧ assπ2
presupposition presπ1 ∧ (assπ1 → presπ2)
Table 1: Projection table for sequential composition.

The problem of presupposition projection, for our language of programs with error abortion, is to express the presupposition of π in terms of presuppositions and assertions of its components. For that, it is useful to define assπ as ⟨π⟩⊤ (or, equivalently, as ¬[π]⊥), and presπ as ¬E(π).

It is a simple logical exercise to express the assertion and presupposition of π1 ; π2 in terms of the assertions and presuppositions of its components. The result is in Table 1.
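The composition row can also be verified by brute force. In the sketch below (our own encoding), both programs are taken in recipe form, with independent proposition letters for their presuppositions and assertions, and the operational behaviour of π1 ; π2 is compared against the table:

```python
from itertools import combinations

atoms = ('p1', 'a1', 'p2', 'a2')
worlds = [frozenset(c) for r in range(5) for c in combinations(atoms, r)]

def recipe(pres, ass):
    """(¬pres? ; abort) ∪ ass?  as a successor function on worlds."""
    def step(w):
        out = set()
        if not pres(w):
            out.add('error')
        if ass(w):
            out.add(w)
        return out
    return step

def seq(pi1, pi2):
    """Sequential composition; error propagates and is never recovered."""
    def step(w):
        out = set()
        for t in pi1(w):
            out |= {'error'} if t == 'error' else pi2(t)
        return out
    return step

pres1, ass1 = (lambda w: 'p1' in w), (lambda w: 'a1' in w)
pres2, ass2 = (lambda w: 'p2' in w), (lambda w: 'a2' in w)
composed = seq(recipe(pres1, ass1), recipe(pres2, ass2))

for w in worlds:
    succ = composed(w)
    # assertion:      ass π1 ∧ ass π2
    assert any(t != 'error' for t in succ) == (ass1(w) and ass2(w))
    # presupposition: pres π1 ∧ (ass π1 → pres π2)
    assert ('error' not in succ) == (pres1(w) and ((not ass1(w)) or pres2(w)))
```

Both rows of Table 1 hold at all sixteen valuations of the four letters.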

program not π
assertion ¬assπ
presupposition presπ
Table 2: Projection table for negation.

What does it mean to negate a program π? The most straightforward requirement is to let not π be a test that succeeds if π fails, and that aborts with error if π aborts. The following definition of not π implements this:

not π  :≡  (E(π)? ; abort) ∪ ([π]⊥)?

Using this to work out the meaning of not bachelor, we get:

not bachelor  ≡  (E(bachelor)? ; abort) ∪ ([bachelor]⊥)?
 ≡  (¬(m ∧ a)? ; abort) ∪ ([bachelor]⊥)?
 ≡  (¬(m ∧ a)? ; abort) ∪ (¬n)?

Again, it is a simple logical exercise to express the assertion and presupposition of not π in terms of assertion and presupposition of its component π. See Table 2.

program π1 ⇒ π2
assertion assπ1 → assπ2
presupposition presπ1 ∧ (assπ1 → presπ2)
Table 3: Projection table for implication.

The implication π1 ⇒ π2 has as its natural definition not(π1 ; not π2), and its projection behaviour can be worked out from this definition.

⟨π1 ⇒ π2⟩⊤  ≡  ⟨not(π1 ; not π2)⟩⊤
 ≡  ¬⟨π1 ; not π2⟩⊤
 ≡  ¬⟨π1⟩⟨not π2⟩⊤
 ≡  ¬⟨π1⟩¬⟨π2⟩⊤
 ≡  [π1]⟨π2⟩⊤
 ≡  [π1]assπ2
 ≡  [(¬presπ1? ; abort) ∪ assπ1?] assπ2
 ≡  [¬presπ1? ; abort]assπ2 ∧ [assπ1?] assπ2
 ≡  [¬presπ1?][abort]assπ2 ∧ [assπ1?] assπ2
 ≡  (¬presπ1 → ⊤) ∧ [assπ1?]assπ2
 ≡  [assπ1?]assπ2
 ≡  assπ1 → assπ2

The calculation of presupposition failure conditions:

E(π1 ⇒ π2)  ≡  E(not(π1 ; not π2))
 ≡  E(π1 ; not π2)
 ≡  E(π1) ∨ ⟨π1⟩E(not π2)
 ≡  E(π1) ∨ ⟨π1⟩E(π2)
 ≡  ¬presπ1 ∨ (assπ1 ∧ ¬presπ2)

It follows that pres(π1 ⇒ π2) ≡ ¬E(π1 ⇒ π2) ≡ presπ1 ∧ (assπ1 → presπ2). Table 3 gives the projection table for implication.

Applying this to the example of bachelorhood we get, using the facts that assmale ≡ m, presmale ≡ ⊤, assbachelor ≡ n and presbachelor ≡ m ∧ a:

male ⇒ bachelor  ≡  not(male ; not bachelor)
 ≡  (¬(m → (m ∧ a))? ; abort) ∪ (m → n)?
 ≡  ((m ∧ ¬a)? ; abort) ∪ (m → n)?
program π1 or π2
assertion assπ1 ∨ (presπ1 ∧ assπ2)
presupposition presπ1 ∧ (¬assπ1 → presπ2)
Table 4: Projection table for (sequential) disjunction.

Finally, what does it mean to process two programs π1 and π2 disjunctively? Taking linear order into account, one processes them one by one: first execute π1; if this succeeds we are done, otherwise execute π2. This leads to the following definition of π1 or π2:

π1 or π2 :≡ π1 ∪ (not π1 ; π2).

Again we apply this to our running example:

male or bachelor  ≡  male ∪ (not male ; bachelor)
 ≡  male ∪ (¬m? ; bachelor)
 ≡  male ∪ (¬m? ; abort)

The projection table for this is given in Table 4.
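The disjunction row can be checked the same way. The sketch below (our own encoding) builds π1 or π2 from the definition π1 ∪ (not π1 ; π2) and confirms the presupposition row of Table 4; the assertion row is confirmed relative to the states where the presupposition holds:

```python
from itertools import combinations

atoms = ('p1', 'a1', 'p2', 'a2')
worlds = [frozenset(c) for r in range(5) for c in combinations(atoms, r)]

def recipe(pres, ass):
    """(¬pres? ; abort) ∪ ass?  as a successor function on worlds."""
    def step(w):
        out = set()
        if not pres(w):
            out.add('error')
        if ass(w):
            out.add(w)
        return out
    return step

def neg(pi):
    """not π  :≡  (E(π)? ; abort) ∪ ([π]⊥)?"""
    def step(w):
        out = set()
        if 'error' in pi(w):                       # E(π) holds: abort
            out.add('error')
        if not any(t != 'error' for t in pi(w)):   # [π]⊥ holds: test succeeds
            out.add(w)
        return out
    return step

def seq(pi1, pi2):
    def step(w):
        out = set()
        for t in pi1(w):
            out |= {'error'} if t == 'error' else pi2(t)
        return out
    return step

def cup(pi1, pi2):
    return lambda w: pi1(w) | pi2(w)

pres1, ass1 = (lambda w: 'p1' in w), (lambda w: 'a1' in w)
pres2, ass2 = (lambda w: 'p2' in w), (lambda w: 'a2' in w)
pi1, pi2 = recipe(pres1, ass1), recipe(pres2, ass2)
disj = cup(pi1, seq(neg(pi1), pi2))                # π1 or π2

for w in worlds:
    succ = disj(w)
    # presupposition row: pres π1 ∧ (¬ass π1 → pres π2)
    assert ('error' not in succ) == (pres1(w) and (ass1(w) or pres2(w)))
    # assertion row, evaluated where the presupposition holds
    if 'error' not in succ:
        assert any(t != 'error' for t in succ) == \
            (ass1(w) or (pres1(w) and ass2(w)))
```

Restricting the assertion check to presupposition-satisfying states matches the usual practice of reading the assertion of a program relative to its presupposition.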

7.3 Presupposition Accommodation

In many cases where a presupposition of an utterance is violated, the utterance is nevertheless understood. This is called presupposition accommodation: the audience implicitly understands the presupposition as an additional assertion.

In our technical framework, we can define an operation ACC mapping utterances π to ACC(π) by accommodating their presuppositions. The definition of ACC is given by

ACC(π) := presπ? ; assπ?

For the running example case of bachelor, we get:

ACC(bachelor) = (m ∧ a)? ; n?

The presupposition of ACC(π) is always ⊤.

7.4 Presuppositions and Dynamic Epistemic Logic

Epistemic logic, the logic of knowledge, is a branch of modal logic where the modality ‘i knows that’ is studied (compare the entries: epistemic logic, logic of belief revision). The dynamic turn in epistemic logic, which took place around 2000, introduced a focus on change of state, but now with states taken to be representations of the knowledge of a set of agents.

If we fix a set of basic propositions P and a set of agents I, then a knowledge state for P and I consists of a set W of possible worlds, together with a valuation function V that assigns a subset of P to each w in W (if wW, then V(w) lists the basic propositions that are true in w) and for each agent iI, a relation Ri stating the epistemic similarities for i (if wRiw′, this means that agent i cannot distinguish world w from world w′). Epistemic models M = (W, V, {Ri | iI}) are known as multimodal Kripke models. Pointed epistemic models are epistemic models with a designated world w0 representing the actual world.

What happens to a given epistemic state (M, w0) = ((W, V, { Ri | iI}), w0) if a public announcement φ is made? Intuitively, the world set W of M is restricted to those worlds wW where φ holds, and the valuation function V and epistemic relations Ri are restricted accordingly. Call the new model M|φ. In case φ is true in w0, the meaning of the public announcement φ can be viewed as a map from (M, w0) to (M|φ, w0). In case φ is false in w0, no update is possible.
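The restriction operation can be sketched in a few lines of Python (our own encoding of an epistemic model, here without the designated world):

```python
# A public announcement restricts the model to the worlds where φ holds.
def update(model, phi):
    W, V, R = model['W'], model['V'], model['R']
    W2 = {w for w in W if phi(V[w])}
    return {'W': W2,
            'V': {w: V[w] for w in W2},
            'R': {i: {(w, v) for (w, v) in Ri if w in W2 and v in W2}
                  for i, Ri in R.items()}}

# Two worlds; agent a cannot tell whether p holds.
M = {'W': {'w0', 'w1'},
     'V': {'w0': {'p'}, 'w1': set()},
     'R': {'a': {('w0', 'w0'), ('w0', 'w1'), ('w1', 'w0'), ('w1', 'w1')}}}

M2 = update(M, lambda val: 'p' in val)   # publicly announce p (true in w0)
assert M2['W'] == {'w0'}
assert M2['R']['a'] == {('w0', 'w0')}    # agent a now knows that p
```

If the actual world were w1, the announcement of p would be false and the designated world would be eliminated, which is the analogue of the "no update possible" case.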

Veltman's update logic can be accommodated in public announcement logic (compare the entry on common knowledge) by allowing public announcements of the form ◊φ, where the modality is read as reachability under common knowledge. If an S5 knowledge state for a set of agents (compare the entry on epistemic logic) is updated with the public announcement ◊φ, then in case φ is true somewhere in the model, the update changes nothing (for in this case M|◊φ equals M), and otherwise the update yields inconsistency (since public announcements are assumed to be true). This is in accordance with the update logic definition.

The logical toolbox for epistemic logic with communicative updates is called dynamic epistemic logic or DEL. DEL started out from the analysis of the epistemic and doxastic effects of public announcements (Plaza 1989; Gerbrandy 1999). Public announcement is interesting because it creates common knowledge. There is a variety of other kinds of announcement—private announcements, group announcements, secret sharing, lies, and so on—that also have well-defined epistemic effects. A general framework for a wide class of update actions was proposed in Baltag et al. 1999 and Baltag and Moss 2004. A further generalization to a complete logic of communication and change, with enriched actions that allow changing the facts of the world, is provided in Benthem et al. 2006. A textbook treatment of dynamic epistemic logic is given in Ditmarsch et al. 2006.

To flesh out what “transition to an error state” means one may represent the communicative situation of a language utterance with presupposition in more detail, as follows. Represent what a speaker assumes about what her audience knows or believes, in a multi-agent belief (or knowledge) state, and model the effect of the communicative action on the belief state.

A simple way to handle utterances with presupposition in dynamic epistemic logic is by modelling a presupposition P as a public announcement “it is common knowledge that P”. In cases where it is indeed common knowledge that P, an update with this information changes nothing. In cases where P is not common knowledge, however, the utterance is false, and public announcements of falsehoods yield an inconsistent knowledge state: the analogue of the error state in Subsection 7.2 above.

8. Encoding Dynamics in Typed Logic

Compositionality has always been an important concern in the use of logical systems in natural language semantics (see the entry on compositionality). Through the use of higher order logics (see the entries on second-order and higher-order logics and Church's type theory), a thoroughly compositional account of, e.g., the quantificational system of natural language can be achieved, as is demonstrated in classical Montague grammar (Montague 1974a; Montague 1974b; Montague 1973; compare the entry on logical form). We will review how the dynamic approach can be extended to higher order systems. The link between dynamic semantics and type theory is more like a liaison than a stable marriage: there is no intrinsic need for the connection. The connection is treated here to explain the historical influence of dynamic semantics on Montague grammar.

Most proposals for dynamic versions of Montague grammar develop what are in fact higher order versions of dynamic predicate logic (DPL). This holds for Groenendijk and Stokhof 1990, Chierchia 1992, Muskens 1995, Muskens 1996, Muskens 1994, Eijck 1997, Eijck and Kamp 1997, Kohlhase et al. 1996, Kuschert 2000. These systems all inherit a feature (or bug) from the DPL approach: they make re-assignment destructive. DRT does not suffer from this problem: the discourse representation construction algorithms of Kamp 1981 and Kamp and Reyle 1993 are stated in terms of functions with finite domains, and carefully talk about ‘taking a fresh discourse referent’ to extend the domain of a verifying function, for each new noun phrase to be processed.

In extensional Montague grammar ‘a man’ translates as:

λP.∃x(man x ∧ Px).

Here P, of type et, is the variable for the VP slot: it is assumed that VPs denote sets of entities.

In Dynamic Montague Grammar (DMG) in the style of Groenendijk and Stokhof 1990, the translation of an indefinite NP does introduce an anaphoric index. The translation of ‘a man’ would look like

λPλaλa′.∃x(man x ∧ Pui((ui|x)a)a′).

Instead of the basic types e and t of classical extensional Montague grammar, DMG has basic types e, t and m (m for marker). States pick out entities for markers, so they can be viewed as objects of type m → e. Abbreviating m → e as s (for ‘state’), we call objects of type s → s → t state transitions. The variable P in the DMG translation of ‘a man’ has type m → s → s → t, so VP meanings have been lifted from type e → t to this type. Note that → associates to the right, so m → s → s → t is shorthand for m → (s → (s → t)).

Indeed, DMG can be viewed as the result of systematic replacement of entities by markers and of truth values by state transitions. A VP meaning for ‘is happy’ is a function that maps a marker to a state transition. The state transition for marker ui will check whether the input state maps ui to a happy entity, and whether the output context equals the input context.

The variables a, a′ range over states, and the expression (ui|x)a denotes the result of resetting the value of ui in a to x, so the old value of ui gets destroyed (destructive assignment).

The anaphoric index i on reference marker ui is introduced by the translation. In fact, the translation starts from the indexed indefinite noun phrase ‘a mani’.

An alternative treatment is given in Incremental Typed Logic (ITL), an extension to typed logic of a ‘stack semantics’ that is based on variable free indexing and that avoids the destructive assignment problem. The basic idea of the stack semantics for DPL, developed in Vermeulen 1993b, is to replace the destructive assignment of ordinary DPL, which throws away old values when resetting, by a stack valued one, that allows old values to be re-used. Stack valued assignments assign to each variable a stack of values, the top of the stack being the current value. Existential quantification pushes a new value on the stack, but there is also the possibility of popping the stack, to re-use a previously assigned value. ITL (Eijck 2000) is in fact a typed version of stack semantics, using a single stack.

Assuming a domain of entities, contexts are finite lists of entities. If c is a context of length n, then we refer to its elements as c[0], …, c[n−1], and to its length as |c|. We will refer to the type of contexts of length i as [e]i. If c is a context in [e]i, then objects of type {0, …, i−1} can serve as indices into c. If c ∈ [e]i and j ∈ {0, …, i−1}, then c[j] is the object of type e that occurs at position j in the context.

A key operation on contexts is extension with an element. If c :: [e]i and x :: e (c is a context of length i and x is an entity) then c^x is the context of length i+1 that has elements c[0], …, c[i−1], x. Thus ^ is an operator of type [e]i → e → [e]i+1.

Also note that types like [e]i are in fact polymorphic types, with i acting as a type variable. See Milner 1978.

In ITL there is no destructive assignment, and indefinite noun phrases do not carry indexes in the syntax. The ITL translation of ‘a man’ picks up an index from context, as follows:

λPλcλc′.∃x(man x ∧ P|c|(c^x)c′).

Here P is a variable of type {0, …, i} → [e]i+1 → [e]j → t, while c is a variable of type [e]i representing the input context of length i, and c′ is a variable of type [e]j representing the output context. Note that the type {0, …, i} → [e]i+1 → [e]j → t for P indicates that P first takes an index in the range {0, …, i}, next a context fitting this range (a context of length i+1), next a context of a yet unknown length, and then gives a truth value. The type of P is that of a unary predicate, lifted to the level of context changers, as follows. Instead of using a variable to range over objects to form an expression of type e, a lifted predicate uses a variable ranging over the size of an input context to form an expression that denotes a changer for that context.

The ITL translation of ‘a man’ has type ({0, …, i} → [e]i+1 → [e]j → t) → [e]i → [e]j → t. In P|c|(c^x)c′, the P variable marks the slot for the VP interpretation; |c| gives the length of the input context to P; it picks up the value i, which is the position of the next available slot when the context is extended. This slot is filled by an object x denoting a man. Note that (c^x)[|c|] = (c^x)[i] = x, so the index i serves to pick out that man from the context.

To see that a dynamic higher order system is expressible in ITL, it is enough to show how to define the appropriate dynamic operations. Assume φ and ψ have the type of context transitions, i.e. type [e] → [e] → t (using [e] for arbitrary contexts), and that c, c′, c″ have type [e]. Then we can define the dynamic existential quantifier, dynamic negation and dynamic composition as follows:

E   :=   λcc′.∃x(c^x = c′)
~φ   :=   λcc′.(c = c′ ∧ ¬∃c″ φcc″)
φ ; ψ   :=   λcc′.∃c″(φcc″ ∧ ψc″c′)

Dynamic implication, ⇒, is defined in the usual way, by means of ~(φ ; ~ψ).

9. Conclusion

Hopefully, the above has given the reader a sense of Dynamic Semantics as a fruitful and flexible approach to meaning and information processing. Dynamic semantics comes with a set of flexible tools, and with a collection of ‘killer applications’, such as the compositional treatment of Donkey sentences, the account of anaphoric linking, the account of presupposition projection, and the account of epistemic updating. It is to be expected that advances in dynamic epistemic logic will lead to further integration. Some would even suggest that dynamic semantics is (nothing but) the application of dynamic epistemic logic in natural language semantics. But this view is certainly too narrow, although it is true that dynamic epistemic logic offers a promising general perspective on communication.


Other Internet Resources

[Please contact the authors with suggestions.]

Related Entries

anaphora | assertion | Boolean algebra: the mathematics of | common knowledge | compositionality | conditionals | contextualism, epistemic | discourse representation theory | implicature | indexicals | information: semantic conceptions of | logic: classical | logic: conditionals | logic: epistemic | logic: of belief revision | logic: propositional dynamic | logic: second-order and higher-order | logical form | mental content: externalism about | mental content: narrow | mind: computational theory of | model theory: first-order | pragmatics | reference | scientific realism | speech acts | Tarski, Alfred: truth definitions | type theory: Church's type theory