This is a file in the archives of the Stanford Encyclopedia of Philosophy.

Two specifications are necessary in order to make clear from the beginning what the limitations and the merits of the program are. The only satisfactory explicit models of this type (which are essentially variations and refinements of the one, usually referred to as the GRW theory, proposed in refs. [Ghirardi, Rimini and Weber, 1985, 1986]) are phenomenological attempts to solve a foundational problem. At present, they involve phenomenological parameters which, if the theory is taken seriously, acquire the status of new constants of nature. Moreover, up to now, all attempts to build satisfactory relativistic generalizations of these models have met with serious mathematical difficulties due to the appearance of intractable divergences, even though they elucidate some crucial points and suggest that there is no reason of principle preventing one from reaching this goal.

In spite of the above remarks, we
think that Collapse Theories have a remarkable relevance, since they
represent a new way to overcome the difficulties of the formalism, to
*close the circle* in the precise sense defined by Abner Shimony
[Shimony, 1989], a way which until a few years ago was considered
impracticable, and which, on the contrary, has been shown to
be perfectly viable. Moreover, they have allowed a clear
identification of the formal features which should characterize any
unified theory of micro and macro processes.

- 1. General Considerations
- 2. The Formalism: A Concise Sketch
- 3. The Macro-Objectification Problem
- 4. The Birth of Collapse Theories
- 5. The Original Collapse Model
- 6. The Continuous Spontaneous Localization Model (CSL)
- 7. A Simplified Version of CSL
- 8. Achievements of Collapse Theories
- 9. Relativistic Dynamical Reduction Models
- 10. Collapse Theories and Definite Perceptions
- 11. The Interpretation of the Theory
- 12. The Problem of the Tails of the Wave Function
- Summary
- Bibliography
- Other Internet Resources
- Related Entries

is not one but the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought [Schrödinger, 1935, p. 807].

These two formal features have embarrassing consequences, since they imply

- objective chance in natural processes, i.e., the nonepistemic nature of quantum probabilities;
- objective indefiniteness of physical properties both at the micro and macro level;
- objective entanglement between spatially separated and non-interacting constituents of a composite system, entailing a sort of holism and a precise kind of nonlocality.

For the sake of generality, we shall first of all present a very concise sketch of ‘the rules of the game’.

1. States of physical systems are associated with
normalized vectors in a Hilbert space, a complex,
infinite-dimensional, linear vector space equipped with a scalar
product. Linearity implies that the superposition principle holds: if
|f> is a state and |g> is a state, then (for
a and b arbitrary complex numbers)
a|f> + b|g> is also a state. Moreover, the state evolution is linear, i.e., it preserves superpositions: if |f,t> and
|g,t> are the states obtained by evolving the
states |f,0> and |g,0>, respectively, from
the initial time t=0 to the time t, then
a|f,t> +
b|g,t> is the state obtained by the
evolution of a|f,0> +
b|g,0>. Finally, the completeness assumption is
made, i.e., that the knowledge of its statevector represents, in
principle, the most accurate information one can have about the state
of an individual physical system.
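The linearity of the evolution can be checked in a finite-dimensional toy model. This is only a sketch: the rotation matrix below stands in for an arbitrary unitary evolution operator, and the state labels are illustrative.

```python
import numpy as np

# Toy 2-dimensional illustration: any unitary U plays the role of the
# (linear) time-evolution operator.
f0 = np.array([1.0, 0.0], dtype=complex)   # |f,0>
g0 = np.array([0.0, 1.0], dtype=complex)   # |g,0>
a, b = 0.6, 0.8j                           # arbitrary complex amplitudes

theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)

# Evolve the superposition directly ...
lhs = U @ (a * f0 + b * g0)
# ... and compare with the superposition of the separately evolved states.
rhs = a * (U @ f0) + b * (U @ g0)

print(np.allclose(lhs, rhs))  # True: the evolution preserves superpositions
```

The same identity holds for any linear operator, which is exactly why, as discussed below, the embarrassing macroscopic superpositions cannot be avoided within a purely linear theory.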

2. The observable quantities are represented by self-adjoint
operators B on the Hilbert space. The associated eigenvalue equations
B|b_{k}> =
b_{k}|b_{k}>
and the corresponding eigenmanifolds (the linear manifolds spanned by
the eigenvectors associated to a given eigenvalue, also called
eigenspaces) play a basic role for the predictive content of the
theory. In fact:

i. The eigenvalues b_{k} of an operator B represent the only possible outcomes in a measurement of the corresponding observable.

ii. The norm (i.e. the length) of the projection of the normalized vector (i.e. of length 1) describing the state of the system onto the eigenmanifold associated to a given eigenvalue gives the probability of obtaining the corresponding eigenvalue as the outcome of the measurement. In particular, it is useful to recall that when one is interested in the probability of finding a particle at a given place, one has to resort to the so-called configuration space representation of the statevector. In such a case the statevector becomes a square-integrable function of the position variables of the particles of the system, whose modulus squared yields the probability density for the outcomes of position measurements.
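Rules (i) and (ii) can be sketched in a finite-dimensional toy model; the operator, the state and the degeneracy chosen below are illustrative assumptions, not drawn from the text.

```python
import numpy as np

# A self-adjoint B on C^3 with the eigenvalue 1 doubly degenerate.
B = np.diag([1.0, 1.0, 2.0])
outcomes = sorted(float(v) for v in set(np.linalg.eigvalsh(B)))
print(outcomes)  # [1.0, 2.0]: the only possible measurement outcomes

psi = np.array([0.6, 0.0, 0.8])   # normalized state vector

# Projector onto the eigenmanifold of the eigenvalue b = 1 (spanned by
# the first two basis vectors); the outcome probability is the squared
# norm of the projection of psi onto that eigenspace.
P1 = np.diag([1.0, 1.0, 0.0])
prob_1 = np.linalg.norm(P1 @ psi) ** 2
print(round(prob_1, 6))  # 0.36; the outcome b = 2 then has probability 0.64
```

In the configuration-space representation mentioned above, the sum over basis vectors is replaced by an integral of the modulus squared of the wave function over the relevant region.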

We stress that, according to the above scheme, quantum mechanics makes only conditional probabilistic predictions (conditional on the measurement being actually performed) for the outcomes of prospective (and in general incompatible) measurement processes. Only if a state belongs already before the act of measurement to an eigenmanifold of the observable which is going to be measured, can one predict the outcome with certainty. In all other cases -- if the completeness assumption is made -- one has objective nonepistemic probabilities for different outcomes.

The orthodox position gives a very simple answer to the question: what determines the outcome when different outcomes are possible? Nothing -- the theory is complete and, as a consequence, it is illegitimate to raise any question about possessed properties referring to observables for which different outcomes have non-vanishing probabilities of being obtained. Correspondingly, the referents of the theory are the results of measurement procedures. These are to be described in classical terms and involve in general mutually exclusive physical conditions.

As regards the legitimacy of attributing properties to physical systems, one could say that quantum mechanics warns us against requiring too many properties to be actually possessed by physical systems. However -- with Einstein -- one can adopt as a sufficient condition that one be able (without in any way disturbing the system) to predict with certainty the outcome of a measurement. In this case then, whenever the overall statevector factorizes into the product of a state of the Hilbert space of the physical system S and of the rest of the world, S does possess some properties (actually a complete set of properties, i.e., those associated to a maximal set of commuting observables).

Before concluding this section we must add some comments about the
measurement process. Quantum theory was created to deal with
microscopic phenomena. In order to obtain information about them one
must be able to establish strict correlations between the states of
the microscopic systems and the states of objects we can
perceive. Within the formalism, this is described by considering
appropriate micro-macro interactions. The fact that when the
measurement is completed one can make statements about the outcome is
accounted for by the already mentioned WPR postulate [Dirac, 1948]:
*a measurement always causes a system to jump into an eigenstate of
the observed quantity*. Correspondingly, also the statevector of
the apparatus ‘jumps’ into the manifold associated to the
recorded outcome.

Let us begin by recalling the basic points of the standard argument:

Suppose that a microsystem S, just before the measurement of an observable B, is in the eigenstate |b_{j}> of the corresponding operator. The apparatus (a macrosystem) used to gain information about B is initially assumed to be in a precise macroscopic state, its ready state, corresponding to a definite macro property -- e.g., its pointer points at 0 on a scale. Since the apparatus A is made of elementary particles, atoms and so on, it must be described by quantum mechanics, which will associate to it the state vector |A_{0}>. One then assumes that there is an appropriate system-apparatus interaction lasting for a finite time, such that when the initial apparatus state is triggered by the state |b_{j}> it ends up in a final configuration |A_{j}>, which is macroscopically distinguishable from the initial one and from the other configurations |A_{k}> in which it would end up if triggered by a different eigenstate |b_{k}>. Moreover, one assumes that the system is left in its initial state. In brief, one assumes that one can dispose things in such a way that the system-apparatus interaction can be described as:

(1) (initial state): |b_{k}>|A_{0}>   (final state): |b_{k}>|A_{k}>

Equation (1) and the hypothesis that the superposition principle governs all natural processes tell us that, if the initial state of the microsystem is a linear superposition of different eigenstates (for simplicity we will consider only two of them), one has:

(2) (initial state): (a|b_{k}> + b|b_{j}>)|A_{0}>   (final state): (a|b_{k}>|A_{k}> + b|b_{j}>|A_{j}>).
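The transition from Eq. (1) to Eq. (2) can be mimicked in a toy model: linearity alone fixes the image of the superposition, and the resulting system-apparatus state is entangled. The dimensions and state labels below are illustrative assumptions.

```python
import numpy as np

# System states |b_k>, |b_j> in C^2; apparatus states |A_0>, |A_k>, |A_j>
# in C^3 (toy dimensions).
b_k = np.array([1.0, 0.0]); b_j = np.array([0.0, 1.0])
Ak = np.array([0.0, 1.0, 0.0]); Aj = np.array([0.0, 0.0, 1.0])

a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)   # equal-weight superposition

# If the linear interaction maps |b_k>|A_0> -> |b_k>|A_k> and
# |b_j>|A_0> -> |b_j>|A_j>, linearity fixes the final state of Eq. (2):
final = a * np.kron(b_k, Ak) + b * np.kron(b_j, Aj)

# The result is not a product state chi (x) phi: the 2x3 coefficient
# matrix of a product state would have rank 1.
rank = np.linalg.matrix_rank(final.reshape(2, 3))
print(rank)  # 2 -> the final state is entangled
```

The rank-based check is the standard (Schmidt-rank) criterion for entanglement of a pure bipartite state.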

Some remarks about this are in order:

- The scheme is highly idealized, both because it takes for granted
that one can prepare the apparatus in a precise state, which is
impossible since we cannot have control over all its degrees of
freedom, and because it assumes that the apparatus registers the
outcome without altering the state of the measured system. However, as
we shall discuss below, these assumptions are by no means essential to
derive the embarrassing conclusion we have to face, i.e., that
the final state is a linear superposition of two states corresponding
to two macroscopically different states of the apparatus. Since
we know that the + representing linear superpositions cannot be
replaced by the logical alternative
*either ... or*, the measurement problem arises: what meaning can one attach to a state of affairs in which two macroscopically and perceptively different states occur simultaneously?
- As already mentioned, the standard solution to this problem is
given by the WPR postulate: in a measurement process reduction occurs:
the final state is not the one appearing at the right hand side of
Eq. (2) but, since macro-objectification takes place, it is

(3) either |b_{k}>|A_{k}> or |b_{j}>|A_{j}>, with probabilities |a|^{2} and |b|^{2}, respectively.

Nowadays, there is a general consensus that this solution is absolutely unacceptable for two basic reasons:

- It corresponds to assuming that the linear nature of the theory is broken at a certain level. Thus, quantum theory is unable to explain how it can happen that the apparata behave as required by the WPR postulate (which is one of the axioms of the theory).
- Even if one were to accept that quantum mechanics has a limited
field of applicability, so that it does not account for all natural
processes and, in particular, it breaks down at the macrolevel, it is
clear that the theory does not contain any precise criterion for
identifying the borderline between micro and macro, linear and
nonlinear, deterministic and stochastic, reversible and
irreversible. To use J.S. Bell's words, there is nothing in the
theory fixing such a borderline and the
*split* between the two above types of processes is fundamentally *shifty*. As a matter of fact, if one looks at the historical debate on this problem, one can easily see that it is precisely by continuously resorting to this ambiguity about the split that adherents of the Copenhagen orthodoxy or *easy solvers* [Bell, 1990] of the measurement problem have rejected the criticism of the *heretics* [Gottfried, 2000]. For instance, Bohr succeeded in rejecting Einstein's criticisms at the Solvay Conferences by stressing that some macroscopic parts of the apparatus had to be treated fully quantum mechanically; von Neumann and Wigner displaced the split by locating it between the physical and the conscious (but what is a conscious being?), and so on. Also other proposed solutions to the problem, notably certain versions of many-worlds interpretations, suffer from analogous ambiguities.

It is not our task to review here the various attempts to solve the above difficulties. One can find many exhaustive treatments of this problem in the literature. On the contrary, we would like to discuss how the macro-objectification problem is indeed a consequence of very general, in fact unavoidable, assumptions on the nature of measurements, and not specifically of the assumptions of von Neumann's model. This was established in a series of theorems of increasing generality, notably the ones by Fine [1970], Shimony [1974], Brown [1986] and Busch and Shimony [1996]. Possibly the most general and direct proof is given by Bassi and Ghirardi [2000], whose results we briefly summarize. The assumptions of the theorem are:

(i) that a microsystem can be prepared in two different eigenstates of an observable (such as, e.g., the spin component along the z-axis) and in a superposition of two such states;

(ii) that one has a sufficiently reliable way of ‘measuring’ such an observable, meaning that when the measurement is triggered by each of the two above eigenstates, the process leads in the vast majority of cases to macroscopically and perceptually different situations of the universe. This requirement allows for cases in which the experimenter does not have perfect control of the apparatus, the apparatus is entangled with the rest of the universe, the apparatus makes mistakes, or the measured system is altered or even destroyed in the measurement process;

(iii) that all natural processes obey the linear laws of the theory.

From these very general assumptions one can show that, repeating the measurement on systems prepared in the superposition of the two given eigenstates, in the great majority of cases one ends up in a superposition of macroscopically and perceptually different situations of the whole universe. If one wishes to have an acceptable final situation, one mirroring the fact that we have definite perceptions, one is arguably compelled to break the linearity of the theory at an appropriate stage.

Various investigations during the 1970s can be considered as
preliminary steps for the subsequent developments. In the years
1970-1973 L. Fonda, A. Rimini, T. Weber and myself were seriously
concerned with quantum decay processes and in particular with the
possibility of deriving, within a quantum context, the exponential
decay law [Fonda, Ghirardi, Rimini and Weber, 1973; Fonda *et
al*., 1978]. Some features of this approach are extremely
relevant for the DRP. Let us list them:

- One deals with individual physical systems;
- The statevector is supposed to undergo random processes at random times, inducing sudden changes driving it either within the linear manifold of the unstable state or within the one of the decay products;
- To make the treatment quite general (the apparatus does not know which kind of unstable system it is testing) one is led to identify the random processes with localization processes of the relative coordinates of the decay fragments. Such an assumption, combined with the peculiar resonant dynamics characterizing an unstable system, yields, completely generally, the desired result. The ‘relative position basis’ is the preferred basis of this theory;
- We have applied analogous ideas to measurement processes [Fonda, Ghirardi and Rimini, 1973];
- The final equation for the evolution at the ensemble level is of the quantum dynamical semigroup type and has a structure extremely similar to the final one of the GRW theory.

Obviously, in these papers the reduction processes involved were not assumed to be ‘spontaneous and fundamental’ natural processes, but were due to system-environment interactions.

Almost in the same years, P. Pearle [Pearle, 1976, 1979], and subsequently N. Gisin [Gisin, 1984] and others, had entertained the idea of accounting for the reduction process in terms of a stochastic differential equation. However, they had not given any general suggestion about how to identify the states to which the dynamical equation should lead. Indeed, these states were assumed to depend on the particular measurement process one was considering. Without a clear indication on this point there was no way to identify a mechanism whose effect could be negligible for microsystems but extremely relevant for the macroscopic ones. N. Gisin subsequently gave an extremely interesting proof [Gisin, 1989] that nonlinear modifications of the standard equation without stochasticity are unacceptable, since they imply the possibility of sending superluminal signals. Soon afterwards, R. Grassi and myself [Ghirardi and Grassi, 1991] showed that stochastic modifications without nonlinearity can at most induce ensemble and not individual reductions, i.e., they do not guarantee that the state vector of each individual physical system is driven into a manifold corresponding to definite properties.

Within such a model, originally referred to as QMSL (Quantum Mechanics with Spontaneous Localizations), the problem of the choice of the preferred basis is solved by remarking that the most embarrassing superpositions, at the macroscopic level, are those involving different spatial locations of macroscopic objects. Actually, as Einstein has stressed [Born, 1971, p. 223], this is a crucial point which has to be faced by anybody aiming to take a macro-objective position about natural phenomena: ‘A macro-body must always have a quasi-sharply defined position in the objective description of reality’. Accordingly, QMSL considers the possibility of spontaneous processes, which are assumed to occur instantaneously and at the microscopic level, which tend to suppress the linear superpositions of differently localized states. The required trigger mechanism must then follow consistently.

The key assumption of QMSL is the following: each elementary constituent of any physical system is subjected, at random times, to random and spontaneous localization processes (which we will call hittings) around appropriate positions. To have a precise mathematical model one has to be very specific about the above assumptions; in particular one has to make explicit HOW the process works, i.e. which modifications of the wave function are induced by the localizations, WHERE it occurs, i.e. what determines the occurrence of a localization at a certain position rather than at another one, and finally WHEN, i.e. at what times, it occurs. The answers to these questions are as follows.

Let us consider a system of *N* distinguishable particles and
let us denote by
*F*(*q*_{1}, *q*_{2}, ..., *q*_{N})
the coordinate representation (wave function) of the state vector (we
disregard spin variables since hittings are assumed not to act on
them).

(a) The answer to the question HOW is the following: if a hitting occurs for the *i*-th particle at point *x*, the wave function is instantaneously multiplied by an appropriately normalized Gaussian function

G(q_{i}, x) = K exp[-(1/2d^{2})(q_{i} - x)^{2}],

where *d* represents the localization accuracy. Let us denote by

L_{i}(q_{1}, q_{2}, ..., q_{N}; x) = F(q_{1}, q_{2}, ..., q_{N}) G(q_{i}, x)

the wave function immediately after the localization, as yet unnormalized.

(b) As concerns the specification of WHERE the localization occurs, it is assumed that the probability density P(x) of its taking place at the point x is given by the norm of the state L_{i} (the length, or to be more precise, the integral of the modulus squared of the function L_{i} over the 3N-dimensional space). This implies that hittings occur with higher probability at those places where, in the standard quantum description, there is a higher probability of finding the particle. Note that the above prescription introduces nonlinear and stochastic elements in the dynamics. The constant K appearing in the expression of G(q_{i}, x) is chosen in such a way that the integral of P(x) over the whole space equals 1.

(c) Finally, the question WHEN is answered by assuming that the hittings occur at randomly distributed times, according to a Poisson distribution, with mean frequency *f*.
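The HOW/WHERE/WHEN prescriptions can be sketched numerically for a single particle in one dimension. This is a toy discretization with illustrative values of d and f (not the physical parameters of the model), and the normalization constant K is absorbed by normalizing the sampled distribution directly.

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
d = 1.0          # localization accuracy (toy value, not 10^-5 cm)
f = 0.5          # mean hitting frequency (toy value, not 10^-16 s^-1)

# Superposition of two far-apart bumps, normalized so sum |psi|^2 dx = 1.
psi = np.exp(-(x + 5) ** 2) + np.exp(-(x - 5) ** 2)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

rng = np.random.default_rng(0)

# WHEN: waiting times between hittings are exponential with mean 1/f.
t_next = rng.exponential(1.0 / f)

# WHERE: sample the hitting centre x_bar from P(x_bar) = ||L_i||^2.
P = np.array([np.sum(np.abs(psi * np.exp(-((x - xb) ** 2) / (2 * d ** 2))) ** 2) * dx
              for xb in x])
P /= P.sum()
x_bar = rng.choice(x, p=P)

# HOW: multiply by the Gaussian and renormalize.
L = psi * np.exp(-((x - x_bar) ** 2) / (2 * d ** 2))
psi_after = L / np.sqrt(np.sum(np.abs(L) ** 2) * dx)

# After the hitting essentially all the weight sits on one side.
left = np.sum(np.abs(psi_after[x < 0]) ** 2) * dx
print(round(left))  # 0 or 1, depending on which bump the hitting selects
```

Running this repeatedly with different seeds selects the left or right bump with (approximately) equal frequency, mirroring the h/t example discussed below.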

It is straightforward to convince oneself that the hitting process
leads, when it occurs, to the suppression of the linear
superpositions of states in which the same particle is well localized
at different positions separated by a distance greater than
*d*. As a simple example we can consider a single particle
whose wavefunction is different from zero only in two small and far
apart regions *h* and *t*. Suppose that a localization
occurs around *h*; the state after the hitting is then
appreciably different from zero only in a region around *h*
itself. A completely analogous argument holds for the case in which
the hitting takes place around *t*. As concerns points which
are far from both *h* and *t*, one easily sees that the
probability density for such hittings, according to the
multiplication rule determining *L*_{i},
turns out to be practically zero, and moreover, that if such a
hitting were to occur, after the wave function is normalized, the
wave function of the system would remain almost unchanged.

We can now discuss the most important feature of the theory, i.e.,
the Trigger Mechanism. To understand the way in which the spontaneous
localization mechanism is enhanced by increasing the number of
particles which are in far apart spatial regions (as compared to
*d*), one can consider, for simplicity, the superposition
|*S*>, with equal weights, of two macroscopic pointer
states |*H*> and |*T*>, corresponding to two
different pointer positions *H* and *T*,
respectively. Taking into account that the pointer is ‘almost
rigid’ and contains a macroscopic number *N* of
microscopic constituents, the state can be written, in obvious
notation, as:

(4) |S> = (1/√2)[|1 near h_{1}> ... |N near h_{N}> + |1 near t_{1}> ... |N near t_{N}>],

where *h*_{i} is near *H*, and
*t*_{i} is near *T*. The states
appearing in the first term on the right-hand side of Eq. (4) have
coordinate representations which are different from zero only when
their arguments (1,...,*N*) are all near *H*, while
those of the second term are different from zero only when they are
all near *T*. It is now evident that if any of the particles
(say, the *i*-th particle) undergoes a hitting process,
e.g. near the point *h*_{i}, the
multiplication prescription leads practically to the suppression of
the second term in (4). Thus any spontaneous localization of any of
the constituents amounts to a localization of the pointer. The
hitting frequency is therefore effectively amplified proportionally
to the number of constituents. Notice that, for simplicity, the
argument makes reference to an almost rigid body, i.e. to one for
which all particles are around *H* in one of the states of the
superposition and around *T* in the other. It should however
be obvious that what really matters in amplifying the reductions is
the number of particles which are in different positions in the two
states appearing in the superposition itself.
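The branch-suppression step of the trigger argument can be sketched at the level of the two branch amplitudes of Eq. (4): a single hitting near h_i multiplies each branch by the Gaussian evaluated at the i-th particle's position in that branch. The positions and the value of d below are toy numbers.

```python
import numpy as np

d = 1.0
h_i, t_i = 0.0, 8.0      # the particle's positions in the two branches,
                         # far apart compared with d
x_bar = 0.1              # hitting centre, which happens to fall near h_i

def G(q):
    # Gaussian hitting factor (overall constant K drops out on normalization)
    return np.exp(-((q - x_bar) ** 2) / (2 * d ** 2))

# Equal-weight branches before the hitting; each is multiplied by G at the
# i-th particle's position in that branch.
amp_H = (1 / np.sqrt(2)) * G(h_i)
amp_T = (1 / np.sqrt(2)) * G(t_i)

norm = np.hypot(amp_H, amp_T)
print((amp_H / norm) ** 2)  # 1.0: one hitting on one particle kills the T branch
```

Since a hitting on *any* of the N constituents produces the same suppression, the effective localization rate of the pointer is N times the single-particle rate, which is the amplification mechanism described above.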

Under these premises we can now proceed to choose the parameters
*d* and *f* of the theory, i.e., the localization
accuracy and the mean localization frequency. The argument just given
allows one to understand how one can choose the parameters in such a
way that the quantum predictions for microscopic systems remain fully
valid while the embarrassing macroscopic superpositions in
measurement-like situations are suppressed in very short
times. Accordingly, as a consequence of the unified dynamics
governing all physical processes, individual macroscopic objects
acquire definite macroscopic properties. The choice suggested in the
GRW-model is:

(5) f = 10^{-16} s^{-1},   d = 10^{-5} cm

It follows that a microscopic system undergoes a localization, on
average, every hundred million years, while a macroscopic one
undergoes a localization every 10^{-7} seconds. With
reference to the challenging version of the macro-objectification
problem presented by Schrödinger with the famous example of his
cat, J.S. Bell comments [Bell, 1987, p. 44]: [within QMSL] *the cat
is not both dead and alive for more than a split second*.
Besides the extremely low frequency of the hittings for microscopic
systems, also the fact that the localization width is large compared
to the dimensions of atoms (so that even when a localization occurs
it does very little violence to the internal economy of an atom)
plays an important role in guaranteeing that no violation of
well-tested quantum mechanical predictions is implied by the modified
dynamics.
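The two rates quoted above follow from elementary arithmetic; the Avogadro-scale particle number assumed for "a macroscopic object" is the standard order of magnitude, not a parameter of the theory.

```python
# Back-of-the-envelope check of the GRW rates implied by choice (5).
f = 1e-16           # hittings per second per constituent
N_macro = 1e23      # particles in a macroscopic pointer (assumed order of magnitude)

seconds_per_year = 3.15e7
micro_wait_years = (1 / f) / seconds_per_year   # a single, isolated particle
macro_wait_seconds = 1 / (f * N_macro)          # any constituent triggers collapse

print(f"{micro_wait_years:.1e} years")  # ~3e8 years: the "hundred million years"
print(f"{macro_wait_seconds:.0e} s")    # ~1e-07 s for the macroscopic object
```

The huge ratio between the two waiting times is the whole point of the model: the same unified dynamics is utterly negligible for microsystems and practically instantaneous for macroscopic superpositions.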

Some remarks are appropriate. First of all, QMSL, being precisely
formulated, allows one to locate precisely the ‘split’ between
micro and macro, reversible and irreversible, quantum and
classical. The transition between the two types of
‘regimes’ is governed by the number of particles which are
well localized at positions further apart than 10^{-5} cm in
the two states whose coherence is going to be dynamically suppressed.
Second, the model is, in principle, testable against quantum
mechanics. As a matter of fact, an essential part of the program
consists in proving that its predictions do not contradict any
established fact about microsystems and macrosystems.

A more satisfactory solution to this problem (see however the remarks at the end of this subsection) can be obtained by injecting the physically appropriate principles of the GRW model within the approach of P. Pearle. This line of thought has led to a quite elegant formulation of a dynamical reduction model, usually referred to as CSL [Pearle, 1989; Ghirardi, Pearle and Rimini, 1990] in which the discontinuous jumps which characterize QMSL are replaced by a continuous stochastic evolution in the Hilbert space (a sort of Brownian motion of the statevector).

We will not enter into the rather technical details of this interesting development of the original GRW proposal, since the basic ideas and physical implications are precisely the same as those of the original formulation. Actually, one could argue that the above idea of tackling the problem of identical particles by considering the average particle number within an appropriate volume is correct. In fact it has been proved [Ghirardi, Pearle and Rimini, 1990] that for any CSL dynamics there is a hitting dynamics which, from a physical point of view, is ‘as close to it as one wants’. Instead of entering into the details of the CSL formalism, it is useful, for the discussion below, to analyze a simplified version.

(6) exp{-f[(n_{1} - m_{1})^{2} + (n_{2} - m_{2})^{2} + ...]t},

the sum being extended to all cells in the universe. Apart from
differences relating to the identity of the constituents, the overall
physics is quite similar to that implied by QMSL. Obviously, there are
interesting physical implications which deserve to be discussed. A
detailed analysis has been presented in [Ghirardi and Rimini, 1990].
As shown there and as follows from estimates about possible effects
for superconducting devices [Rae, 1990; Gallis and Fleming, 1990;
Rimini, 1995], and for the excitation of atoms [Squires, 1991], it
turns out not to be possible, with present technology, to perform
clear-cut experiments allowing one to discriminate the model from standard
quantum mechanics [Benatti *et al*., 1995].

There is however an interesting aspect which might be relevant to the
idea of relating the suppression of coherence to gravitational
effects. Given Eq. (6), notice that the worst case scenario (from the
point of view of the time necessary to suppress coherence) is the
superposition of two states for which the occupation numbers of the
individual cells differ only by one unit. Indeed, in this case the
amplifying effect of taking the square of the differences
disappears. Let us then raise the question: how many nucleons (at
worst) should occupy different cells, in order for the given
superposition to be dynamically suppressed within the time which
characterizes human perceptual processes? Since such a time is of the
order of 10^{-2} sec and *f* = 10^{-16}
sec^{-1}, the number of displaced nucleons must be of the
order of 10^{18}, which corresponds, to a remarkable
accuracy, to a Planck mass. This figure seems to point in the same
direction as attempts, such as Penrose's, to relate
reduction mechanisms to quantum gravitational effects [Penrose,
1989].
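The estimate can be checked with elementary arithmetic; the nucleon and Planck masses used for the comparison are standard values, and the comparison is an order-of-magnitude one.

```python
# Reproducing the estimate in the text: how many displaced nucleons make the
# suppression time of Eq. (6) comparable to the perception time ~1e-2 s?
f = 1e-16              # s^-1, the GRW/CSL frequency parameter
t_perception = 1e-2    # s, characteristic time of human perception

# With occupation numbers differing by one unit per occupied cell, the
# damping factor is exp(-f * n * t); it becomes of order 1 when n ~ 1/(f t).
n_nucleons = 1 / (f * t_perception)
print(f"{n_nucleons:.0e}")  # 1e+18 displaced nucleons

m_nucleon = 1.67e-27   # kg
m_planck = 2.18e-8     # kg
print(n_nucleons * m_nucleon / m_planck)  # ~0.08: the same ballpark as a Planck mass
```

Whether this numerical coincidence is physically significant is, of course, exactly the speculative point at issue in gravity-based reduction proposals.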

We often discussed his notions on objective reality. I recall that during one walk Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I look at it [Pais, 1982, p. 5].

In the context of Einstein's remarks in

as a consequence, there is hardly likely to be anyone who would be inclined to consider seriously [...] that the existence of the location is essentially dependent upon the carrying out of an observation made on the registration strip. For, in the macroscopic sphere it simply is considered certain that one must adhere to the program of a realistic description in space and time; whereas in the sphere of microscopic situations one is more readily inclined to give up, or at least to modify, this program [p. 671].

However,

the ‘macroscopic’ and the ‘microscopic’ are so inter-related that it appears impracticable to give up this program in the ‘microscopic’ alone [p. 674].

One might speculate that Einstein would not have taken the DRP seriously, given that it is a fundamentally indeterministic program. On the other hand, the DRP allows precisely for this middle ground, between giving up a ‘realistic description in space and time’ altogether (the moon is not there when nobody looks), and requiring that it be applicable also at the microscopic level (as some kind of ‘hidden variables’ theory). It would seem that the pursuit of ‘realism’ for Einstein was more a program that had been very successful than an a priori commitment, and that in principle he would have welcomed attempts to give up or weaken microrealistic requirements, provided they allowed one to adopt a macrorealist position.

In the DRP, we can say of an electron in an EPR-Bohm situation that ‘when nobody looks’ it has no definite spin in any direction, and in particular that when it is in a superposition of two states localised far away from each other, it cannot be thought to be at a definite place. In the macrorealm, however, objects do have definite positions and are generally describable in classical terms. That is, the DRP program is not adding ‘hidden variables’ to the theory, but the moon is definitely there even if no sentient being has ever looked at it; or, in the words of J. S. Bell, the DRP

allows electrons (in general microsystems) to enjoy the cloudiness of waves, while allowing tables and chairs, and ourselves, and black marks on photographs, to be rather definitely in one place rather than another, and to be described in classical terms [Bell, 1986, p. 364].

Such a program, as we have seen, is realized by assuming only the existence of wave functions, and by proposing a unified dynamics that will govern both microscopic processes and ‘measurements’. As regards the latter, no vague definitions are needed in order to apply the theory. The equations are followed exactly, and the macroscopic ambiguities that would arise from the linear evolution are theoretically possible, but only of momentary duration, and thus arguably of no practical importance and no source of embarrassment.

We have not analyzed yet the implications about locality, but since
in the DRP program no hidden variables are introduced, the situation
can be no worse than in ordinary quantum mechanics: *‘by
adding mathematical precision to the jumps in the wave function, it
simply makes precise the action at a distance of ordinary quantum
mechanics’* [Bell, 1987, p. 46]. Indeed, a detailed
investigation of the locality properties of the theory becomes
possible, and one can investigate whether the theory represents an
approximation to a relativistically invariant theory. The analysis carried out so far, however, proves that at least in the non-relativistic version a program of dynamical reduction can be consistently developed. Moreover, as will become clear when we discuss the interpretation of the theory in terms of mass
density, the QMSL and CSL theories lead in a natural way to attach
definite properties in space and time to macroscopic objects, the
main objective of Einstein's requirements.

The achievements of the DRP which are relevant for the debate about the foundations of quantum mechanics can also be concisely summarized in the words of H.P. Stapp:

The collapse mechanisms so far proposed could, on the one hand, be viewed as ad hoc mutilations designed to force ontology to kneel to prejudice. On the other hand, these proposals show that one can certainly erect a coherent quantum ontology that generally conforms to ordinary ideas at the macroscopic level [Stapp, 1989, p. 157].

As is well known, [Suppes and Zanotti, 1976; van Fraassen, 1982; Jarrett, 1984; Shimony, 1983; see also the entry on Bell's Theorem], Bell's locality assumption is equivalent to the conjunction of two other assumptions, viz., in Shimony's terminology, parameter independence and outcome independence. In view of the experimental violation of Bell's inequality, one has to give up either or both of these assumptions. The above splitting of the locality requirement into two logically independent conditions is particularly useful in discussing the different status of CSL and deterministic hidden variable theories with respect to relativistic requirements. In fact, as proved by Jarrett himself, when parameter independence is violated, if one had access to the variables which specify completely the state of individual physical systems, one could send faster-than-light signals from one wing of the apparatus to the other. Moreover, in ref. [Ghirardi and Grassi, 1994, 1996] it has been shown that it is impossible to build a genuine relativistically invariant theory which, in its nonrelativistic limit, exhibits parameter dependence and does not entail backward causation. On the other hand, if locality is violated only by the occurrence of outcome dependence then faster-than-light signaling cannot be achieved.
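The splitting just described can be checked directly against the quantum singlet correlations. The following sketch is a toy illustration (not part of the cited analyses): it uses the standard joint probability P(a,b|x,y) = ¼(1 − ab cos(x − y)) for spin measurements along coplanar directions at angles x and y, and verifies that the marginals satisfy parameter independence while the conditionals violate outcome independence.

```python
import math

def p_joint(a, b, x, y):
    """Singlet-state joint probability for outcomes a, b = +/-1 when the
    two spins are measured along coplanar directions at angles x and y."""
    return 0.25 * (1 - a * b * math.cos(x - y))

def p_marginal(a, x, y):
    """Probability of the nearby outcome a, summed over the distant one."""
    return sum(p_joint(a, b, x, y) for b in (+1, -1))

# Parameter independence: the nearby marginal is 1/2 whatever the distant
# setting y is, so the distant setting alone carries no signal.
for y in (0.0, 1.0, 2.0):
    assert abs(p_marginal(+1, 0.3, y) - 0.5) < 1e-12

# Outcome dependence: conditioning on the distant *outcome* does change the
# nearby probability; with aligned settings the outcomes are perfectly
# anticorrelated, so P(a = +1 | b = +1) = 0 rather than 1/2.
p_b = sum(p_joint(a, +1, 0.0, 0.0) for a in (+1, -1))
p_a_given_b = p_joint(+1, +1, 0.0, 0.0) / p_b
assert p_a_given_b == 0.0
```

Violating only outcome independence is what blocks faster-than-light signaling: the controllable inputs (the settings) leave the local statistics unchanged, while the uncontrollable outcomes do not.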

Now, it is well known that any deterministic theory (i.e., one in
which the range of all probability distributions for outcomes is the
set {0,1}) that reproduces quantum predictions must exhibit parameter
dependence. This fact by itself suggests that such theories will
certainly meet more serious difficulties with relativity than
theories like standard quantum mechanics which violate only outcome
independence and which do not allow faster-than-light signaling
[Eberhard, 1978; Ghirardi, Rimini and Weber, 1980; Ghirardi, Grassi,
Rimini and Weber, 1988]. What about CSL? It has been possible to
prove [Ghirardi, Grassi, Butterfield and Fleming, 1993; Butterfield
*et al*., 1993] that it, too, violates Bell's locality
only by violating outcome independence. This is to some extent encouraging: even though, as we will be led to conclude, it seems
extremely difficult to build a relativistic model inducing
reductions, this result shows that there are no reasons of principle
making such a project unviable.

Let us be more specific. The first attempt to obtain a relativistic generalization of dynamical reduction models was presented in ref. [Pearle, 1990]. It should be stressed that having individual reductions prevents the theory from being invariant at the individual level (note that QMSL and CSL are not even Galilei invariant at the individual level). Thus one is led to introduce a generalization of the invariance requirement: the theory must be stochastically invariant. This means that, even though the individual processes may look different to different observers, any two of them will agree on the composition of the final ensemble for (subjectively) the same initial conditions. We remark that it is precisely in this sense that both QMSL and CSL turn out to be Galilei invariant.

Pearle [1990] considered a fermion field coupled to a meson field and put forward the idea of inducing localizations for the fermions through their coupling to the mesons and a stochastic dynamical reduction mechanism acting on the meson variables. He considered Heisenberg evolution equations for the coupled fields and a Tomonaga-Schwinger CSL-type evolution equation with a skew-Hermitian coupling to a c-number stochastic potential for the state vector. This approach has been systematically investigated in refs. [Ghirardi, Grassi and Pearle, 1990a, 1990b], to which we refer the reader for a detailed discussion. Here we limit ourselves to stressing that, under certain approximations, one obtains in the non-relativistic limit a CSL-type equation inducing spatial localization. However, due to the white-noise nature of the stochastic potential, novel renormalization problems arise: the increase per unit time and per unit volume in the energy of the meson field is infinite, because infinitely many mesons are created. For the reasons just discussed, one cannot say that the possibility of generalizing CSL to the relativistic case has been established. Not even more recent attempts have succeeded in overcoming these difficulties.

Nevertheless, the efforts which have been spent on such a program have led to a better understanding of some points and have thrown light on important conceptual issues. First, they have led to a completely general and rigorous formulation of the concept of stochastic invariance [Ghirardi, Grassi and Pearle, 1990a]. Second, they have prompted a critical reconsideration, based on the discussion of smeared observables with compact support, of the problem of locality at the individual level. This analysis has brought out the necessity of reconsidering the criteria for the attribution of objective local properties to physical systems. In specific situations, one cannot attribute any local property to a microsystem: any attempt to do so gives rise to ambiguities. However, in the case of macroscopic systems, the impossibility of attributing to them local properties (or, equivalently, the ambiguity surrounding such properties) lasts only for time intervals of the order of those necessary for the dynamical reduction to take place. Moreover, no objective property corresponding to a local observable, even for microsystems, can emerge as a consequence of a measurement-like event occurring in a space-like separated region: such properties emerge only in the future light cone of the macroscopic event considered. Finally, recent investigations [Ghirardi and Grassi, 1994, 1996; Ghirardi, 1996, 2000] have shown that the very formal structure of the theory is such that it does not allow one, even conceptually, to establish cause-effect relations between space-like separated events.

Having listed some interesting results obtained along these lines, in concluding this section it is necessary to stress once more the immense difficulties that the program of a relativistic generalization has encountered until now. The question of whether such a program will find a satisfactory formulation still remains ‘the big problem’ for this type of investigation.

We have presented a detailed answer to this criticism [Aicardi *et
al*., 1991]. The crucial points of our argument are the
following: we perfectly agree that in the case considered the
superposition persists for long times (actually the superposition
must persist, since, the system under consideration being
microscopic, one could perform interference experiments which
everybody would expect to confirm quantum mechanics). However, to
deal in the appropriate and correct way with such a criticism, one
has to consider all the systems which enter into play (electron,
screen, photons and brain) and the universal dynamics governing all
relevant physical processes. A simple estimate of the number of ions
which are involved in the visual perception mechanism makes it
perfectly plausible that, in the process, a sufficient number of particles are
displaced by a sufficient spatial amount to satisfy the conditions
under which, according to QMSL, the suppression of the superposition
of the two nervous signals will take place within the time scale of
perception.
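The amplification mechanism underlying this estimate can be made explicit with a little arithmetic. The sketch below is illustrative only: the per-nucleon frequency is the original GRW value quoted later in this entry, and the ~10 ms perception timescale is an assumed round figure, not the detailed estimate of Aicardi *et al*.

```python
# Back-of-the-envelope sketch of the QMSL amplification mechanism (not the
# detailed estimate of Aicardi et al.).  Each nucleon suffers localizations
# with the original GRW frequency f, so a superposition in which N nucleons
# are displaced is suppressed at a rate of roughly N * f.
f = 1e-16                      # GRW hitting frequency per nucleon (s^-1)

def collapse_time(n_displaced):
    """Expected time (s) before a hitting suppresses the superposition."""
    return 1.0 / (n_displaced * f)

# Illustrative round figure for the timescale of visual perception:
t_perception = 1e-2            # ~10 ms (an assumed value)

# Number of displaced nucleons needed for suppression within that time:
n_needed = 1.0 / (f * t_perception)   # of order 1e18 nucleons

assert 0.9e18 < n_needed < 1.1e18
assert collapse_time(n_needed) < 1.1 * t_perception
```

The point of the argument is that the number of ions (hence nucleons) displaced along the visual pathway is plausibly of this order, so the reduction occurs within the perception timescale.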

To avoid misunderstandings, this analysis by no means amounts to attributing a special role to the conscious observer or to perception. The observer's brain is the only system present in the set-up in which a superposition of two states involving different locations of a large number of particles occurs. As such it is the only place where the reduction can and actually must take place according to the theory. It is extremely important to stress that if in place of the eye of a human being one puts in front of the photon beams a spark chamber or a device leading to the displacement of a macroscopic pointer, or producing ink spots on a computer output, reduction will equally take place. In the given example, the human nervous system is simply a physical system, a specific assembly of particles, which performs the same function as one of these devices, if no other such device interacts with the photons before the human observer does. It follows that it is incorrect and seriously misleading to claim that QMSL requires a conscious observer to make definite the macroscopic properties of physical systems.

A further remark may be appropriate. The above analysis could be taken by the reader as indicating a very naive and oversimplified attitude towards the deep problem of the mind-brain correspondence. There is no claim and no presumption that QMSL allows a physicalist explanation of conscious perception. It is only pointed out that, for what we know about the purely physical aspects of the process, one can state that before the nervous pulses reach the higher visual cortex, the conditions guaranteeing the suppression of one of the two signals are verified. In brief, a consistent use of the dynamical reduction mechanism in the above situation accounts for the definiteness of the conscious perception, even in the extremely peculiar situation devised by Albert and Vaidman.

The question of the correct interpretation of the theory has been the subject of debate, some of the principal approaches having originated with J. S. Bell. Given that the wavefunction itself is an object in the (higher-dimensional) configuration space, Bell was particularly keen to identify what could be taken as some kind of ‘local beable’, from which one could obtain a representation of the perceived reality in ordinary three-dimensional space. In the specific context of QMSL, Bell [1987, p. 45] suggested that the ‘GRW jumps’, which we called ‘hittings’ above, could play this role. Later, he suggested that the most natural interpretation for the wavefunction in the context of a collapse theory would be as describing the ‘density [...] of stuff’ in configuration space [Bell, 1990, p. 30]. The interpretation which, in the opinion of the present writer, is most appropriate for collapse theories [Ghirardi, Grassi and Benatti, 1995, Ghirardi, 1997a, 1997b] was ultimately developed from this suggestion, together with the firm conviction that an acceptable interpretation should establish precise links between our formal description of physical processes and the events taking place in the three-dimensional space we ‘see’ around us.

First of all, various investigations [Pearle and Squires, 1994]
had made clear that QMSL and CSL needed a modification, i.e., the
characteristic localization frequency of the elementary constituents
of matter had to be made proportional to the mass characterizing the
particle under consideration. In particular, the original frequency
for the hitting processes *f* = 10^{-16}
sec^{-1} is the one characterizing the nucleons, while, e.g.,
electrons would suffer hittings with a frequency reduced by a factor of about 2000. Unfortunately we have no space to discuss here the
physical reasons which make this choice appropriate; we refer the
reader to the above paper, as well as to the recent detailed analysis
by Peruzzi and Rimini [Peruzzi and Rimini, 2000]. With this
modification, what the nonlinear dynamics strives to make
‘objectively definite’ is the average mass distribution in
the whole universe (averaged over appropriate volumes associated with
the characteristic localization accuracy of the theory). Second, a
deep critical reconsideration [Ghirardi, Grassi and Benatti, 1995] has
made evident how the concept of ‘distance’ that
characterizes the Hilbert space is inappropriate in accounting for
the similarity or difference between macroscopic situations. Just to
give a convincing example, consider three states |*h*>,
|*h**> and |*t*> of a macrosystem (let us say a
massive macroscopic bulk of matter), the first corresponding to its
being located here, the second to its having the same location but
one of its atoms (or molecules) being in a state orthogonal to the
corresponding state in |*h*>, and the third having exactly
the same internal state of the first but being differently located
(there). Then, despite the fact that the first two states are
practically indistinguishable from each other at the macrolevel,
while the first and the third correspond to completely different and
directly perceivable situations, the Hilbert space distance between
|*h*> and |*h**> is equal to that between
|*h*> and |*t*>.
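The point can be verified in a toy model. Representing the three mutually orthogonal states by an orthonormal basis of a three-dimensional Hilbert space (a drastic simplification, of course, standing in for the full many-body states), the two distances come out exactly equal:

```python
import numpy as np

# Toy model: three mutually orthogonal states of the macrosystem,
# |h> (here), |h*> (here, with one atom flipped) and |t> (there),
# represented by an orthonormal basis of a 3-dimensional Hilbert space.
h, h_star, t = np.eye(3)

d_h_hstar = np.linalg.norm(h - h_star)   # microscopically different pair
d_h_t = np.linalg.norm(h - t)            # macroscopically different pair

# Both distances equal sqrt(2): the Hilbert-space metric is blind to the
# macroscopic difference between the two comparisons.
assert abs(d_h_hstar - 2 ** 0.5) < 1e-12
assert abs(d_h_t - 2 ** 0.5) < 1e-12
```

This is the generic situation: any two orthogonal unit vectors are at distance √2, however similar or different the physical situations they describe.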

When the localization frequency is related to the mass of the
constituents, then, as above and completely generally (i.e. even when
one is dealing with a body which is not almost rigid, such as a gas
or a cloud), the mechanism leading to the suppression of the
superpositions of macroscopically different states is fundamentally
governed by the sum (or the integral) of the squared differences of
the mass densities associated to the two superposed states, averaged
over the characteristic volume of the theory, i.e., 10^{-15}
cm^{3}. This suggests taking the following attitude: what
the theory is about, what is real ‘out there’ at a given
space point **x**, is just the average mass density in
the characteristic volume around **x** :

(7) *M*(**x**,*t*) = <Ψ,*t*| *M*(**x**) |Ψ,*t*>,

where *M*(**x**) is the mass density
operator corresponding to the given volume around **x**.
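A one-dimensional toy version of the expectation value (7) may help. In the sketch below (illustrative units; the smearing over the characteristic volume is omitted for brevity), a single particle in an equal superposition of two distant packets yields an average mass density that assigns half the mass ‘here’ and half ‘there’:

```python
import numpy as np

# One-dimensional toy version of the average mass density: a single
# particle of mass m = 1 in an equal superposition of two Gaussian
# packets centred 20 units apart (illustrative units throughout).
m, sigma, sep = 1.0, 1.0, 20.0
x = np.linspace(-40.0, 40.0, 4001)
dx = x[1] - x[0]

def packet(centre):
    return np.exp(-(x - centre) ** 2 / (4 * sigma ** 2))

psi = packet(-sep / 2) + packet(+sep / 2)
psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalize

density = m * np.abs(psi) ** 2                        # average mass density

# Before any collapse, the expectation value assigns half the mass to
# each region -- neither 'here' nor 'there' carries the full mass m.
left = np.sum(density[x < 0]) * dx
right = np.sum(density[x >= 0]) * dx
assert abs(left + right - m) < 1e-6
assert abs(left - 0.5 * m) < 1e-3
```

It is precisely such half-and-half densities for macroscopic bodies that the reducing dynamics suppresses, restoring an ‘objectively definite’ mass distribution.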

one should not tolerate tails in wave functions which are so broad that their different parts can be discriminated by the senses, even if very low probability amplitude is assigned to them.

After a localization of a macroscopic system, typically the pointer of the apparatus, its centre of mass will be associated to a wave function which is different from zero over the whole of space. If one adopts the probabilistic interpretation of the standard theory, this means that even when the measurement process is over, there is a nonzero (even though extremely small) probability of finding the pointer in an arbitrary position, instead of the one corresponding to the registered outcome. This is taken to be unacceptable, as indicating that the DRP does not actually overcome the macro-objectification problem.

Let us state immediately that the (alleged) problem arises entirely from keeping the standard interpretation of the wave function unchanged, in particular assuming that its modulus squared gives the probability density of the position variable. However, as we have discussed in the previous section, there are much more serious reasons of principle which require one to abandon the probabilistic interpretation and replace it either with one of those proposed by Bell or, more appropriately in our opinion, with the mass density interpretation we have outlined above.

Before entering into a detailed discussion of this subtle point we
need to focus the problem better. We cannot avoid making two
remarks. Suppose one adopts, for the moment, the conventional quantum
position. We agree that, within such a framework, the fact that wave
functions never have strictly compact spatial support can be
considered puzzling. However this is a problem arising directly from
the mathematical features (spreading of wave functions) and from the
probabilistic interpretation of the theory, and not at all a problem
peculiar to the dynamical reduction models. Indeed, the fact that,
e.g., the wave function of the centre of mass of a pointer or of a
table does not have compact support has never been taken to be a problem
for standard quantum mechanics. When the wave function is extremely
well peaked around a given point in space, it has always been
accepted that it describes a table located at a certain position, and
that this corresponds in some way to our perception of it. It is
obviously true that, for the given wave function, the quantum rules
entail that if a measurement were performed the table could be found
(with an extremely small probability) to be kilometers away, but
this *is not* the measurement or the macro-objectification
problem of the standard theory. The latter concerns a completely
different situation, i.e., that in which one is confronted with a
superposition with comparable weights of two macroscopically
separated wave functions, both of which possess tails (i.e., have
non-compact support) but are appreciably different from zero only in
very narrow intervals. This is the really embarrassing situation
which conventional quantum mechanics is unable to make
understandable. To which perception of the position of the table does
this wave function correspond?

The implications of the adoption of the QMSL theory for this problem should be obvious. Within QMSL, superpositions of two states which, when considered individually, are assumed to lead to different and definite perceptions of macroscopic locations are dynamically forbidden. If some process tends to produce such superpositions, then the reducing dynamics induces the localization of the centre of mass (the associated wave function being appreciably different from zero only in a narrow and precise interval). Correspondingly, the possibility arises of attributing to the system the property of being in a definite place and thus of accounting for our definite perception of it. Summarizing, we stress once more that the criticism about the tails, as well as the requirement that the appearance of macroscopically extended (even though extremely small) tails be strictly forbidden, is exclusively motivated by an uncritical commitment to the probabilistic interpretation of the theory, even as concerns the psycho-physical correspondence: states assigning non-exactly-vanishing probabilities to different outcomes of position measurements are taken to correspond to ambiguous perceptions about these positions. Since neither within the standard formalism nor within the framework of dynamical reduction models can a wave function have compact support, taking such a position leads one to conclude that it is the Hilbert space description of physical systems itself which has to be given up.

It ought to be stressed that there is nothing in the GRW theory which would make it problematic to choose functions with compact support for the purpose of the localizations; but it also has to be noted that following this line would be totally useless: since the evolution equation contains the kinetic energy term, any function, even one with compact support at a given time, will instantaneously spread, acquiring a tail extending over the whole of space. If one sticks to the probabilistic interpretation and accepts the completeness of the description of the states of physical systems in terms of the wave function, the tail problem cannot be avoided.
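The instantaneous spreading invoked here can be seen numerically. The following sketch is a discretized toy model (ħ = m = 1, arbitrary units): it evolves a compactly supported wave function under the free Schrödinger equation by the standard Fourier split-step method, and checks that after an arbitrarily short time the amplitude is nonzero far outside the initial support.

```python
import numpy as np

# Free-particle evolution (hbar = m = 1, arbitrary units) of a wave
# function with compact support at t = 0, via the Fourier split-step
# method: psi(t) = IFFT( exp(-i k^2 t / 2) * FFT(psi(0)) ).
n, L = 2048, 200.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = L / n
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)

psi0 = np.where(np.abs(x) < 1.0, 1.0, 0.0)        # top-hat on [-1, 1]
psi0 = psi0 / np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

t = 1e-3                                           # arbitrarily short time
psi_t = np.fft.ifft(np.exp(-0.5j * k ** 2 * t) * np.fft.fft(psi0))

far = np.abs(x) > 50.0                             # far outside the support
assert np.max(np.abs(psi0[far])) == 0.0            # exactly zero initially
assert np.max(np.abs(psi_t[far])) > 1e-12          # a tail has appeared
```

(Within the discretization the ‘tail’ is of course band-limited; the exact free evolution makes the wave function nonzero on all of space for any t > 0.)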

The solution to the tails problem can only derive from abandoning
completely the probabilistic interpretation and from adopting a more
physical and realistic interpretation relating ‘what is out
there’ to, e.g., the mass density distribution over the whole
universe. In this connection, the following example will be
instructive [Ghirardi, Grassi and Benatti, 1995]. Take a massive sphere
of normal density and mass of about 1 kg. Classically, the mass of
this body would be totally concentrated within the radius of the
sphere, call it *r*. In QMSL, after the extremely short time
interval in which the collapse dynamics leads to a ‘regime’
situation, and if one considers a sphere with radius *r* +
10^{-5} cm, the integral of the mass density over the rest of
space turns out to be an incredibly small fraction (of the order of 10^{-10^{15}}) of the mass of a single proton.
In such conditions, it seems quite legitimate to claim that the
macroscopic body is localised within the sphere.
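The order of magnitude at work here can be appreciated even in a single-particle toy calculation (which does not reproduce the quoted figure: that involves the enormous number of nucleons in the body): the tail probability of a Gaussian localized to accuracy sigma falls super-exponentially with the distance measured in units of sigma. The formula used below is the standard large-z asymptotic expansion of the Gaussian tail.

```python
import math

def log10_gaussian_tail(z):
    """log10 of the one-sided Gaussian tail probability Q(z), using the
    standard large-z asymptotic Q(z) ~ exp(-z**2 / 2) / (z * sqrt(2*pi))."""
    return (-z * z / 2 - math.log(z * math.sqrt(2 * math.pi))) / math.log(10)

# A particle localized to accuracy sigma: the probability of finding it
# z standard deviations away collapses super-exponentially with z.
assert log10_gaussian_tail(10) < -20       # already below 10^-20 at 10 sigma
assert log10_gaussian_tail(100) < -2000    # utterly negligible at 100 sigma
```

For a macroscopic body, an exponent of this kind is further multiplied by the number of constituents, which is the origin of figures as extreme as the one quoted above.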

However, even this quite reasonable position has been questioned: it has been claimed [Lewis, 1997] that the very existence of the tails implies that the enumeration principle (i.e. the fact that the claim ‘particle 1 is within this box & particle 2 is within this box & ... & particle *n* is within this box & no other particle is within this box’ implies the claim ‘there are *n* particles within this box’) does not hold, if one takes seriously the mass density interpretation of collapse theories. This paper has given rise to a long debate which it would be inappropriate to reproduce here. We refer the reader to the following papers: [Ghirardi and Bassi, 1999; Clifton and Monton, 1999a, 1999b; Bassi and Ghirardi, 1999, 2001]. Various arguments have been presented for and against the criticism by Lewis.

We would like to conclude this brief analysis by stressing once more that, in our opinion, all the disagreements and misunderstandings concerning this problem originate in the fact that the authors who find difficulties with the proposed mass density interpretation of the Collapse Theories have not fully accepted the idea that the probabilistic interpretation of the wave function must be abandoned.

- Aicardi, F., Borsellino, A., Ghirardi, G.C., and Grassi, R. [1991], ‘Dynamic models for state-vector reduction: Do they ensure that measurements have outcomes?’, *Foundations of Physics Letters*, **4**, 109.
- Albert, D.Z. [1990], ‘On the Collapse of the Wave Function’, in *Sixty-Two Years of Uncertainty*, A. Miller (ed.), Plenum, New York.
- -----. [1992], *Quantum Mechanics and Experience*, Harvard University Press, Cambridge, Mass.
- Albert, D.Z., and Vaidman, L. [1989], ‘On a proposed postulate of state reduction’, *Physics Letters*, **A139**, 1.
- Bassi, A., and Ghirardi, G.C. [1999], ‘More about dynamical reduction and the enumeration principle’, *British Journal for the Philosophy of Science*, **50**, 719.
- -----. [2000], ‘A general argument against the universal validity of the superposition principle’, *Physics Letters*, **A275**, 373.
- -----. [2001], ‘Counting marbles: Reply to Clifton and Monton’, *British Journal for the Philosophy of Science*, **52**, 125.
- Bell, J.S. [1986], ‘Six possible worlds of quantum mechanics’, in *Proceedings of the Nobel Symposium 65: Possible Worlds in Arts and Sciences*, de Gruyter, New York.
- -----. [1987], ‘Are there quantum jumps?’, in *Schrödinger -- Centenary Celebration of a Polymath*, C.W. Kilmister (ed.), Cambridge University Press, Cambridge.
- -----. [1990], ‘Against "measurement"’, in *Sixty-Two Years of Uncertainty*, A. Miller (ed.), Plenum, New York.
- Benatti, F., Ghirardi, G.C., and Grassi, R. [1995], ‘Quantum Mechanics with Spontaneous Localization and Experiments’, in *Advances in Quantum Phenomena*, E. Beltrametti *et al*. (eds), Plenum, New York.
- Bohm, D. [1952], ‘A suggested interpretation of the quantum theory in terms of hidden variables. I & II’, *Physical Review*, **85**, 166; *ibid*., **85**, 180.
- Bohm, D., and Bub, J. [1966], ‘A proposed solution of the measurement problem in quantum mechanics by a hidden variable theory’, *Reviews of Modern Physics*, **38**, 453.
- Born, M. [1971], *The Born-Einstein Letters*, Walter and Co., New York.
- Brown, H.R. [1986], ‘The insolubility proof of the quantum measurement problem’, *Foundations of Physics*, **16**, 857.
- Busch, P., and Shimony, A. [1996], ‘Insolubility of the quantum measurement problem for unsharp observables’, *Studies in History and Philosophy of Modern Physics*, **27B**, 397.
- Butterfield, J., Fleming, G.N., Ghirardi, G.C., and Grassi, R. [1993], ‘Parameter dependence in dynamical models for state-vector reduction’, *International Journal of Theoretical Physics*, **32**, 2287.
- Clifton, R., and Monton, B. [1999a], ‘Losing your marbles in wavefunction collapse theories’, *British Journal for the Philosophy of Science*, **50**, 697.
- -----. [1999b], ‘Counting marbles with "accessible" mass density: A reply to Bassi and Ghirardi’, *British Journal for the Philosophy of Science*, **51**, 155.
- Dirac, P.A.M. [1948], *Quantum Mechanics*, Clarendon Press, Oxford.
- Eberhard, P. [1978], ‘Bell's theorem and different concepts of locality’, *Nuovo Cimento*, **46B**, 392.
- Fine, A. [1970], ‘Insolubility of the quantum measurement problem’, *Physical Review*, **D2**, 2783.
- Fonda, L., Ghirardi, G.C., and Rimini, A. [1973], ‘Evolution of quantum systems subject to random measurements’, *Nuovo Cimento*, **18B**, 1.
- -----. [1978], ‘Decay theory of unstable quantum systems’, *Reports on Progress in Physics*, **41**, 587.
- Fonda, L., Ghirardi, G.C., Rimini, A., and Weber, T. [1973], ‘Quantum foundations of exponential decay law’, *Nuovo Cimento*, **15A**, 689.
- Gallis, M.R., and Fleming, G.N. [1990], ‘Environmental and spontaneous localization’, *Physical Review*, **A42**, 38.
- Ghirardi, G.C. [1996], ‘Properties and events in a relativistic context: Revisiting the dynamical reduction program’, *Foundations of Physics Letters*, **9**, 313.
- -----. [1997a], ‘Quantum Dynamical Reduction and Reality: Replacing Probability Densities with Densities in Real Space’, *Erkenntnis*, **45**, 349.
- -----. [1997b], ‘Macroscopic Reality and the Dynamical Reduction Program’, in *Structures and Norms in Science*, M.L. Dalla Chiara (ed.), Kluwer, Dordrecht.
- -----. [2000], ‘Local measurements of nonlocal observables and the relativistic reduction process’, *Foundations of Physics*, **30**, 1337.
- Ghirardi, G.C., and Bassi, A. [1999], ‘Do dynamical reduction models imply that arithmetic does not apply to ordinary macroscopic objects?’, *British Journal for the Philosophy of Science*, **50**, 49.
- Ghirardi, G.C., and Grassi, R. [1991], ‘Dynamical Reduction Models: Some General Remarks’, in *Nuovi Problemi della Logica e della Filosofia della Scienza*, D. Costantini *et al*. (eds), Editrice Clueb, Bologna.
- -----. [1994], ‘Outcome predictions and property attribution: The EPR argument reconsidered’, *Studies in the History and Philosophy of Science*, **25**, 397.
- -----. [1996], ‘Bohm's Theory versus Dynamical Reduction’, in *Bohmian Mechanics and Quantum Theory: An Appraisal*, J. Cushing *et al*. (eds), Kluwer, Dordrecht.
- Ghirardi, G.C., Grassi, R., and Benatti, F. [1995], ‘Describing the macroscopic world: Closing the circle within the dynamical reduction program’, *Foundations of Physics*, **25**, 5.
- Ghirardi, G.C., Grassi, R., Butterfield, J., and Fleming, G.N. [1993], ‘Parameter dependence and outcome dependence in dynamical models for state-vector reduction’, *Foundations of Physics*, **23**, 341.
- Ghirardi, G.C., Grassi, R., and Pearle, P. [1990a], ‘Relativistic dynamical reduction models: General framework and examples’, *Foundations of Physics*, **20**, 1271.
- -----. [1990b], ‘Relativistic Dynamical Reduction Models and Nonlocality’, in *Symposium on the Foundations of Modern Physics 1990*, P. Lahti and P. Mittelstaedt (eds), World Scientific, Singapore.
- Ghirardi, G.C., Grassi, R., Rimini, A., and Weber, T. [1988], ‘Experiments of the EPR type involving CP-violation do not allow faster-than-light communication between distant observers’, *Europhysics Letters*, **6**, 95.
- Ghirardi, G.C., Pearle, P., and Rimini, A. [1990], ‘Markov processes in Hilbert space and continuous spontaneous localization of systems of identical particles’, *Physical Review*, **A42**, 78.
- Ghirardi, G.C., and Rimini, A. [1990], ‘Old and New Ideas in the Theory of Quantum Measurement’, in *Sixty-Two Years of Uncertainty*, A. Miller (ed.), Plenum, New York.
- Ghirardi, G.C., Rimini, A., and Weber, T. [1980], ‘General argument against superluminal transmission through the quantum-mechanical measurement process’, *Lettere al Nuovo Cimento*, **27**, 293.
- -----. [1985], ‘A Model for a Unified Quantum Description of Macroscopic and Microscopic Systems’, in *Quantum Probability and Applications*, L. Accardi *et al*. (eds), Springer, Berlin.
- -----. [1986], ‘Unified dynamics for microscopic and macroscopic systems’, *Physical Review*, **D34**, 470.
- Gisin, N. [1984], ‘Quantum measurements and stochastic processes’, *Physical Review Letters*, **52**, 1657; and ‘Reply’, *ibid*., **53**, 1776.
- -----. [1989], ‘Stochastic quantum dynamics and relativity’, *Helvetica Physica Acta*, **62**, 363.
- Gottfried, K. [2000], ‘Does Quantum Mechanics Carry the Seeds of its own Destruction?’, in *Quantum Reflections*, D. Amati *et al*. (eds), Cambridge University Press, Cambridge.
- Jarrett, J.P. [1984], ‘On the physical significance of the locality conditions in the Bell arguments’, *Nous*, **18**, 569.
- Lewis, P. [1997], ‘Quantum mechanics, orthogonality and counting’, *British Journal for the Philosophy of Science*, **48**, 313.
- Pais, A. [1982], *Subtle is the Lord*, Oxford University Press, Oxford.
- Pearle, P. [1976], ‘Reduction of statevector by a nonlinear Schrödinger equation’, *Physical Review*, **D13**, 857.
- -----. [1979], ‘Toward explaining why events occur’, *International Journal of Theoretical Physics*, **18**, 489.
- -----. [1989], ‘Combining stochastic dynamical state-vector reduction with spontaneous localization’, *Physical Review*, **A39**, 2277.
- -----. [1990], ‘Toward a Relativistic Theory of Statevector Reduction’, in *Sixty-Two Years of Uncertainty*, A. Miller (ed.), Plenum, New York.
- Pearle, P., and Squires, E. [1994], ‘Bound-state excitation, nucleon decay experiments, and models of wave-function collapse’, *Physical Review Letters*, **73**, 1.
- Penrose, R. [1989], *The Emperor's New Mind*, Oxford University Press, Oxford.
- Peruzzi, G., and Rimini, A. [2000], ‘Compoundation invariance and Bohmian mechanics’, *Foundations of Physics*, **30**, 1445.
- Rae, A.I.M. [1990], ‘Can GRW theory be tested by experiments on SQUIDs?’, *Journal of Physics*, **A23**, 57.
- Rimini, A. [1995], ‘Spontaneous Localization and Superconductivity’, in *Advances in Quantum Phenomena*, E. Beltrametti *et al*. (eds), Plenum, New York.
- Schilpp, P.A. (ed.) [1949], *Albert Einstein: Philosopher-Scientist*, Tudor, New York.
- Schrödinger, E. [1935], ‘Die gegenwärtige Situation in der Quantenmechanik’ [‘The present situation in quantum mechanics’], *Naturwissenschaften*, **23**, 807.
- Shimony, A. [1974], ‘Approximate measurement in quantum-mechanics. 2’, *Physical Review*, **D9**, 2321.
- -----. [1983], ‘Controllable and uncontrollable non-locality’, in *Proceedings of the International Symposium on the Foundations of Quantum Mechanics*, S. Kamefuchi *et al*. (eds), Physical Society of Japan, Tokyo.
- -----. [1989], ‘Search for a worldview which can accommodate our knowledge of microphysics’, in *Philosophical Consequences of Quantum Theory*, J.T. Cushing and E. McMullin (eds), University of Notre Dame Press, Notre Dame, Indiana.
- -----. [1990], ‘Desiderata for modified quantum dynamics’, in *PSA 1990*, Volume 2, A. Fine, M. Forbes and L. Wessels (eds), Philosophy of Science Association, East Lansing, Michigan.
- Squires, E. [1991], ‘Wave-function collapse and ultraviolet photons’, *Physics Letters*, **A158**, 431.
- Stapp, H.P. [1989], ‘Quantum nonlocality and the description of nature’, in *Philosophical Consequences of Quantum Theory*, J.T. Cushing and E. McMullin (eds), University of Notre Dame Press, Notre Dame, Indiana.
- Suppes, P., and Zanotti, M. [1976], ‘On the determinism of hidden variables theories with strict correlation and conditional statistical independence of observables’, in *Logic and Probability in Quantum Mechanics*, P. Suppes (ed.), Reidel, Dordrecht.
- van Fraassen, B. [1982], ‘The Charybdis of Realism: Epistemological Implications of Bell's Inequality’, *Synthese*, **52**, 25.

Department of Theoretical Physics, University of Trieste

*First published: March 7, 2002*

*Content last modified: March 7, 2002*