This is a file in the archives of the Stanford Encyclopedia of Philosophy.
Macroscopic processes appear to be temporally “directed” in some sense. The spontaneous evolution of systems is always to a future but not past equilibrium state. The nature of this directedness concerns many deep questions at the foundations of philosophy and science.
Thermodynamics is the science that describes much of the time-asymmetric behavior found in the world. This entry's first task, consequently, is to show how thermodynamics treats temporally ‘directed’ behavior. It then concentrates on the following two questions. (1) What is the origin of the thermodynamic asymmetry in time? In a world possibly governed by time-symmetric laws, how should we understand the time-asymmetric laws of thermodynamics? (2) Does the thermodynamic time asymmetry explain the other temporal asymmetries? Does it account, for instance, for the fact that we know more about the past than the future? The discussion thus divides between thermodynamics being an explanandum or explanans. In the former case the answer will concern philosophy of physics; in the latter case it will concern metaphysics, epistemology, and other fields, though in each case there will be blurring between the disciplines.
Consider the following.
Place some chlorine gas in a small closed flask in the corner of a room. Set it up so that an automaton will remove its cover in one minute. Now we know what to do: run. Chlorine is a poison, and furthermore, we know the gas will spread reasonably quickly through its available volume. The chlorine originally in equilibrium in the flask will, upon being freed, ‘relax’ to a new equilibrium.
Or less dramatically:
Place an iron bar over a flame for half an hour. Place another one in a freezer for the same duration. Remove them and place them against one another. Within a short time the hot one will ‘lose its heat’ to the cold one. The new combined two-bar system will settle to a new equilibrium, one intermediate between the cold and hot bar's original temperatures. Eventually the bars will together settle to roughly room temperature.
These are two examples of a tendency of systems to spontaneously evolve to equilibrium; but there are indefinitely more examples in all manner of substance. The physics first used to describe such processes is thermodynamics.
First systematically developed in S. Carnot's Reflections on the Motive Power of Fire (1824), the science of classical thermodynamics is intimately associated with the industrial revolution. Most of the results responsible for the science originated from the practice of engineers trying to improve steam engines. Begun in France and England in the late eighteenth and early nineteenth centuries, the science quickly spread throughout Europe. By the mid-nineteenth century, Clausius in Germany and Thomson in England had developed the theory in great detail.
Thermodynamics is a ‘phenomenal’ science, in the sense that the variables of the science range over macroscopic parameters such as temperature and volume. Whether the microphysics underlying these variables is motive atoms in the void or an imponderable fluid is largely irrelevant to this science. The developers of the theory both prided themselves on this fact and at the same time worried about it. Clausius, for instance, was one of the first to speculate that heat consisted solely of the motion of particles (without an ether), for it made the equivalence of heat with mechanical work less surprising. However, as was common, he kept his ontological beliefs separate from his statement of the principles of thermodynamics because he didn't wish to (in his words) “taint” the latter with the speculative character of the former.[1.]
A treatment of thermodynamics naturally begins with the statements it takes to be laws of nature. These laws are founded upon observations of relationships between particular macroscopic parameters and they are justified by the fact that they are empirically adequate. No further justification of these laws is to be found -- at this stage -- from the details of microphysics. Rather, stable, counterfactual-supporting generalizations about macroscopic features are enshrined as law. The typical textbook treatment of thermodynamics describes some basic concepts, states the laws in a more or less rough way and then proceeds to derive the concepts of temperature and entropy and the various thermodynamic equations of state. It is worth remarking, however, that in the last forty years the subject has been presented with a degree of mathematical rigor not previously achieved. Originating from the early axiomatization by Carathéodory in 1909, the development of ‘rational thermodynamics’ has clarified the concepts and logic of classical thermodynamics to a degree not generally appreciated. There now exist many quite different, mathematically exact approaches to thermodynamics, each starting with different observational regularities as axioms. (For a popular presentation of a recent axiomatization, see Lieb and Yngvason 2000.)
In the traditional approach classical thermodynamics has two laws, the second of which is our main focus. (Readers may have heard of a ‘third law’ as well, but it was added later and is not relevant to the present discussion.) The first law expresses the conservation of energy. The law uses the concept of the internal energy of a system, U(x), which is a function of variables such as volume. For thermally isolated (adiabatic) systems--think of systems such as coffee in a thermos--the law states that this function, U(x), is such that the work W delivered to a system's surroundings is compensated by a loss of internal energy, i.e., dW = -dU. When Joule and others showed that mechanical work and heat were interconvertible, consistency with the principle of energy conservation demanded that heat, Q, considered as a different form of energy, be taken into account. For non-isolated systems we extend the law as dQ = dU + dW, where dQ is the differential of the amount of heat added to the system (in a reversible manner).
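As a rough numerical illustration of this first-law bookkeeping (a sketch of our own, assuming one mole of a monatomic ideal gas; none of the numbers come from the entry itself):

```python
# First-law bookkeeping, dQ = dU + dW, for one mole of a monatomic
# ideal gas (illustrative assumption; any substance with known U would do).
n, R = 1.0, 8.314          # mol, gas constant in J/(mol*K)
dT = 10.0                  # temperature rise in K

# Heating at constant volume: no work is delivered (dW = 0),
# so all the heat goes into internal energy, U = (3/2) n R T.
dU = 1.5 * n * R * dT
dQ_constant_volume = dU + 0.0

# Heating at constant pressure: the expanding gas also does work
# dW = n R dT on its surroundings, so more heat is required for
# the same temperature rise.
dW = n * R * dT
dQ_constant_pressure = dU + dW

print(dQ_constant_volume, dQ_constant_pressure)
```

The point is only that dQ splits into an internal-energy part and a work part; the same ledger balances however the heat is delivered.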
The conservation of energy tells us nothing about temporally asymmetric behavior. In particular, it doesn't follow from the first law that interacting systems quickly tend to approach equilibrium (a state where the values of the macroscopic variables remain approximately stable), and once achieved, never leave this state. It is perfectly consistent with the first law that systems in equilibrium leave equilibrium. Since this tendency of systems cannot be derived from the First Law, another law is needed. Although S. Carnot was the first to state it, the formulations of Kelvin and Clausius are standard:
Kelvin's Second Law: There is no thermodynamic process whose sole effect is to transform heat extracted from a source at uniform temperature completely into work.
Clausius' Second Law: There is no thermodynamic process whose sole effect is to extract a quantity of heat from a colder reservoir and deliver it to a hotter reservoir.
Kelvin's version is essentially the same as the version arrived at by both Carnot and Planck, whereas Clausius' version differs from these in a few ways.[2.]
Clausius' version transparently rules out anti-thermodynamic behavior such as a hot iron bar extracting heat from a neighboring cold iron bar. The cool bar cannot give up a quantity of heat to the warmer bar (without something else happening). Kelvin's statement is perhaps less obvious. It stems from the fact familiar from steam engines that heat energy is a ‘poor’ grade of energy. Consider a gas-filled cylinder with a frictionless piston holding the gas down at one end. If we put a flame under the cylinder, the gas will expand and the piston can perform work, e.g., it might move a ball. However, we can never convert the heat energy straight into work without some other effect occurring. In this case, the gas occupies a larger volume.
In 1854 Clausius introduced the notion of the ‘equivalence value’ of a transformation, a concept that is the ancestor of the modern-day concept of entropy. Later, in 1865, Clausius coined the term ‘entropy’, from the Greek word for transformation. The entropy of a state A, S(A), is defined as the integral S(A) = ∫ dQ/T over a reversible transformation from some arbitrary fixed state O to A. For A to have an entropy, the transformation from O to A must be quasi-static, i.e., a succession of equilibrium states. Continuity considerations then imply that the initial and final states O and A must also be equilibrium states.
In terms of entropy, the Second Law states that in a transformation from equilibrium state A to equilibrium state B, S(B) - S(A) ≥ ∫ dQ/T, where the integral is taken over the actual transformation. Loosely put, for realistic systems, this implies that in the spontaneous evolution of a thermally closed system the entropy can never decrease and that it attains its maximum value at equilibrium. We are invited to think of the Second Law as driving a gas, such as the chlorine above, to its new, higher-entropy equilibrium state. Using this concept of entropy, thermodynamics is able to capture an extraordinary range of phenomena under one simple law. Remarkably, whether they are gases filling their available volumes, two iron bars in contact coming to the same temperature, or milk mixing in your coffee, they all have an observable property in common: their entropy increases. Coupled with the First Law, the Second Law is remarkably powerful. It appears that all classical thermodynamical behavior can be derived from these two simple statements (Penrose 1970).[3.]
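The entropy bookkeeping for the two iron bars can be made concrete with a small numerical sketch of our own (the masses, temperatures, and specific heat are illustrative assumptions, not values from the entry):

```python
import math

# Two identical iron bars brought into thermal contact and left to
# equilibrate (values are illustrative assumptions).
m = 1.0                        # mass of each bar, kg
c = 449.0                      # specific heat of iron, J/(kg*K), approximate
T_hot, T_cold = 400.0, 200.0   # initial temperatures, K

# For identical bars the final temperature is the arithmetic mean.
T_f = (T_hot + T_cold) / 2

# Clausius entropy change along a reversible path: dS = ∫ dQ/T,
# which for a constant heat capacity gives m*c*ln(T_final/T_initial).
dS_hot = m * c * math.log(T_f / T_hot)    # negative: the hot bar loses entropy
dS_cold = m * c * math.log(T_f / T_cold)  # positive: the cold bar gains more
dS_total = dS_hot + dS_cold

print(dS_total > 0)  # the Second Law: total entropy increases
```

The hot bar's entropy decrease is outweighed by the cold bar's gain, so the combined system's entropy goes up, exactly as the inequality requires.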
There are a number of philosophical questions one might ask about the laws of thermodynamics. For instance, where exactly is time-asymmetry found in the above statement of the Second Law? Why think the Second Law is universal? (See Uffink 2001 for an interesting discussion of these topics.) How are these laws framed in a relativistic universe? Do Lorentz boosted gases appear hotter or colder in the new frame? Surprisingly, the correct (special) relativistic transformation rules for thermodynamic quantities, and thus the relativistic understanding of thermodynamic time asymmetry, are still controversial. With all the current activity of physicists being focused on the thermodynamics of black holes in general relativity and quantum gravity, it is amusing to note that special relativistic thermodynamics is still a field with many open questions, both physically and philosophically. (See Earman 1981 and Liu 1994.)
Another important question concerns the reduction of thermodynamic concepts such as entropy to their mechanical, or statistical mechanical, basis. As even a cursory glance at statistical mechanics reveals, there are many candidates for the statistical mechanical entropy, each the center of a different program in the foundations of the field. Here, again, surprisingly, there is no consensus as to which entropy is best suited to be the reduction basis of the thermodynamic entropy (see Sklar 1993; Callender 1999). Consequently, there is little agreement about what the Second Law looks like in statistical mechanics. Despite the worthiness of these issues, this article will focus on the particularly important problem of the direction of time (though, as we'll see, many issues go by this name).
This ‘problem of the direction of time’ has its source in the debates over the status of the second law of thermodynamics between L. Boltzmann and some of his contemporaries, notably J. Loschmidt, E. Zermelo and E. Culverwell. Boltzmann sought the mechanical underpinning of the second law. He came up with a particularly ingenious explanation for why systems tend toward equilibrium. Consider an isolated gas of N particles in a box, where N is large enough to make the system macroscopic (N ≈ 10^23). For the sake of familiarity we will work with classical mechanics. We can characterize the gas by the coordinates and momenta of each of its particles and represent the whole system by a point X = (q, p) in a 6N-dimensional phase space known as Γ, where q = (q1 ... q3N) and p = (p1 ... p3N).
Boltzmann's great insight was to see that the thermodynamic entropy arguably “reduced” to the volume in Γ picked out by the macroscopic parameters of the system. The key ingredient is partitioning Γ into compartments, such that all of the microstates X in a compartment are macroscopically (and thus thermodynamically) indistinguishable. To each macrostate M, there corresponds a volume of Γ, |M|, whose size will depend on the macrostate in question. For combinatorial reasons, almost all of Γ corresponds to a state of thermal equilibrium. There are simply many more ways to be distributed with uniform temperature and pressure than ways to be distributed with nonuniform temperature and pressure. There is a vast numerical imbalance in Γ between the states in thermal equilibrium and the states in thermal nonequilibrium.
We can now introduce Boltzmann's famous entropy formula (up to an additive constant):
SB(M(X)) = k log |M|
where |M| is the volume in Γ associated with the macrostate M, and k is Boltzmann's constant. SB provides a relative measure of the amount of Γ corresponding to each M. Given the mentioned asymmetry in Γ, almost all microstates are such that their entropy value is overwhelmingly likely to increase with time. When the constraints are released on systems initially confined to small sections of Γ, typical systems will evolve into larger compartments. Since the new equilibrium distribution occupies almost all of the newly available phase space, nearly all of the microstates originating in the smaller volume will tend toward equilibrium. Except for those incredibly rare microstates conspiring to stay in small compartments, microstates will evolve in such a way as to have SB increase. Though substantial questions can be raised about the details of this approach, and philosophers can rightly worry about the justification of the standard probability measure on Γ, this explanation seems to offer the correct framework for understanding why the entropy of systems tends to increase with time. (For further explanation and discussion see Bricmont 1996, Callender 1999, Goldstein 2001, Klein 1973 and Lebowitz 1993.)
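Boltzmann's combinatorial point can be illustrated with a toy model of our own devising (a two-compartment sketch; the macrostates and particle number are assumptions made for the example): let the macrostate of N particles be the number sitting in the left half of a box, so |M| is just a binomial coefficient.

```python
import math

# Toy version of Γ: N particles, each either in the left or right half
# of a box, giving 2**N microstates in all. The macrostate M_n is
# "n particles on the left", so |M_n| = C(N, n).
N = 100

def S_B(n, k=1.0):
    """Boltzmann entropy S_B = k log |M| for the macrostate M_n."""
    return k * math.log(math.comb(N, n))

# The uniform macrostate n = N/2 has maximal entropy...
assert max(range(N + 1), key=S_B) == N // 2

# ...and near-uniform macrostates dominate the space of microstates:
equilibrium = sum(math.comb(N, n) for n in range(40, 61))
print(equilibrium / 2 ** N)  # fraction of all microstates with 40-60 on the left
```

Even for a mere 100 particles the macrostates within ten particles of uniform claim the overwhelming bulk of the microstates; for N ≈ 10^23 the imbalance is unimaginably more extreme.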
Before arriving at the account of entropy increase described above, Boltzmann proposed a now notorious "proof", known as the "H-theorem", to the effect that entropy must always increase. Loschmidt and Zermelo launched objections to the H-theorem, but an objection in their spirit can also be advanced against Boltzmann's later view sketched above. Loosely put, because the classical equations of motion are time reversal invariant (TRI), nothing in the original explanation necessarily referred to the direction of time. (See Hurley 1985.) Though I just stated the Boltzmannian account of entropy increase in terms of entropy increasing into the future, the explanation can be turned around and made for the past temporal direction as well. Given a gas in a box that is in a nonequilibrium state, the vast majority of microstates that are antecedents of the dynamical evolution leading to the present macrostate correspond to a macrostate with higher entropy than the present one. Therefore, not only is it highly likely that typical microstates corresponding to a nonequilibrium state will evolve to higher entropy states, but it is also highly likely that they evolved from higher entropy states.
Concisely put, the problem is that given a nonequilibrium state at time t2, it is overwhelmingly likely that
(1) the nonequilibrium state at t2 will evolve to one closer to equilibrium at t3
but that due to the reversibility of the dynamics it is also overwhelmingly likely that
(2) the nonequilibrium state at t2 has evolved from one closer to equilibrium at t1
where t1 < t2 < t3. However, transitions described by (2) do not seem to occur; or phrased more carefully, not both (1) and (2) occur. However we choose to use the terms ‘earlier’ and ‘later,’ clearly entropy doesn't increase in both temporal directions. For ease of exposition let us dub (2) the culprit.
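The reversibility worry can be made vivid with a standard toy model, the Kac ring (our choice of illustration; it is not discussed in the entry). Its dynamics is exactly time-reversal invariant, yet starting from a ‘low-entropy’ all-white state, the coarse-grained disorder grows whether we run the dynamics toward the future or toward the past:

```python
import random

# Kac ring: N colored balls on a ring of N sites; a fixed random subset
# of the edges is "marked". A forward step moves every ball one site
# clockwise, flipping its color (True/False) when it crosses a marked
# edge. A backward step crosses the same edges counterclockwise, so it
# undoes a forward step exactly: the dynamics is TRI.
random.seed(0)
N = 2000
marked = [random.random() < 0.3 for _ in range(N)]  # edge i joins sites i, i+1

def forward(balls):
    # the ball arriving at site i came from site i-1 across edge i-1
    return [balls[i - 1] ^ marked[i - 1] for i in range(N)]

def backward(balls):
    # exact inverse: the ball arriving at site i came from site i+1
    return [balls[(i + 1) % N] ^ marked[i] for i in range(N)]

start = [False] * N                        # all white: a nonequilibrium state
assert backward(forward(start)) == start   # the dynamics really is reversible

def black_fraction(balls):                 # coarse-grained macrovariable
    return sum(balls) / N

fwd, bwd = start, start
for _ in range(50):
    fwd, bwd = forward(fwd), backward(bwd)

# Disorder heads toward its equilibrium value (1/2) in BOTH time directions.
print(black_fraction(start), black_fraction(fwd), black_fraction(bwd))
```

This is exactly the structure of the problem: a nonequilibrium state at t2 sits between higher-entropy states at t1 and t3, so far as the reversible dynamics is concerned; nothing in the laws favors (1) over (2).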
The traditional problem is not merely that nomologically possible (anti-thermodynamic) behavior does not occur when it could. That is not straightforwardly a problem: all sorts of nomologically allowed processes do not occur. Rather, the problem is that statistical mechanics seems to make a prediction that is falsified, and that is a problem according to anyone's theory of confirmation.
Many solutions to this problem have been proposed. Generally speaking, there are two ways to solve the problem: eliminate transitions of type (2) either with special boundary conditions or with laws of nature. The former method works if we assume that earlier states of the universe are of comparatively low entropy and that (relatively) later states are not also low-entropy states. There are no high-to-low-entropy processes simply because earlier entropy was very low. Alternatively, the latter method works if we can somehow restrict the domain of physically possible worlds to those admitting only low-to-high transitions. The laws of nature are the straitjacket on what we deem physically possible. Since we need to eliminate transitions of type (2) while keeping those of type (1) (or vice versa), a necessary condition of the laws doing this job is that they be time reversal noninvariant. Our choice of strategy boils down to either assuming temporally asymmetric boundary conditions or adding (or changing to) time reversal noninvariant laws of nature. Many approaches to this problem have sought to avoid this dilemma, but a little analysis of any proposed ‘third way’ arguably shows this to be impossible.
Without changing the TRI laws of nature, there is no way to eliminate transitions of type (2) in favor of type (1). Nevertheless, appealing to temporally asymmetric boundary conditions, as we've seen, allows us to describe a world wherein (1) but not (2) occurs. A cosmological hypothesis claiming that in the very distant past entropy was much lower will work. Boltzmann, as well as many of the twentieth century's greatest scientists, e.g., Einstein, Feynman, and Schrödinger, saw that this hypothesis is necessary given our laws. (Boltzmann, however, explained this low-entropy condition by treating the observable universe as a natural statistical fluctuation away from equilibrium in a vastly larger universe.) Earlier states do not have higher entropy than present states because we make the cosmological posit that the universe began in an extremely tiny section of its available phase space. Albert 2000 calls this the “Past Hypothesis” and provides a detailed discussion of its role in statistical mechanics.
Classical mechanics is also compatible with a "Future Hypothesis": the claim that entropy is very low in the distant future. The restriction to "distant" is needed, for if the near future were of low entropy, we would not expect the thermodynamic behavior that we see -- see Cocke 1967, Price 1996 and Schulman 1997 for discussion of two-time boundary conditions.
The main dissatisfaction with this solution is that many do not find it sufficiently explanatory of thermodynamic behavior. That a gas in the lab last Wednesday filled its available volume due to special initial conditions may be credible. But that gases everywhere for all time should expand through their available volumes due to special initial conditions is, for some, incredible. The common cause of these events is viewed as unlikely. Expressing this feeling, Penrose 1989 estimates that the probability, given the standard measure on phase space, of the universe starting in the requisite state is astronomically small. Callender 1997, however, assimilates the problem to the general one facing the special sciences -- all special science laws require conspiratorial initial conditions for their generalizations to hold. If the problem really is a problem, according to Callender, it is not necessarily one specific to thermodynamics and time's direction.
The physicist W. Ritz and others have claimed that electromagnetism accounts for the thermodynamic arrow. The wave equation for both mechanical and electromagnetic processes is well known to admit both ‘advanced’ and ‘retarded’ solutions. The retarded solution

φ_ret(r, t) = ∫ [ρ(r′, t − |r − r′|/c) / |r − r′|] d³r′

gives the field amplitude φ_ret at (r, t) in terms of the source density ρ at r′ at earlier times. The advanced solution

φ_adv(r, t) = ∫ [ρ(r′, t + |r − r′|/c) / |r − r′|] d³r′

gives the field amplitude in terms of the source density at r′ at later times. Despite this symmetry, nature seems to contain only processes obeying the retarded solutions. (This popular way of stating the electromagnetic asymmetry is actually misleading. The advanced solutions describe a radiation sink's receiving waves, and this happens all the time. The asymmetry of radiation instead lies with the form (concentrated or dispersed) the sources take.)
If we place an isolated concentrated gas in the middle of a large volume, we would expect the particles to spread out in an expanding sphere about the center of the gas, much as radiation spreads out. It is therefore tempting to think that there is a relationship between the thermodynamic and electromagnetic arrows of time. In a debate in 1909, A. Einstein and W. Ritz disagreed about the nature of this relationship. Ritz took the position that the asymmetry of radiation had to be judged lawlike and that the thermodynamic asymmetry could be derived from this law. Einstein's position was instead that "irreversibility is exclusively based on reasons of probability" (Einstein and Ritz 1909, quoted from Zeh 1989, 13). It is unclear whether he meant probability plus the right boundary conditions, or simply probability alone. In any case, Ritz believed the radiation arrow causes the thermodynamic one, whereas Einstein seems to have held something closer to the opposite position.
It seems that Einstein must be right, or at least, closer to being correct than Ritz. Ritz's position appears implausible if only because it implies gases composed of neutral particles will not tend to spread out. That aside, we now think that the wave asymmetry must originate in asymmetric boundary conditions, just as the statistical mechanical asymmetry may. Recall the statistical version of the Second Law. It implies that with the right (improbable) initial conditions a system will undergo improbable-to-probable transitions rather than the reverse. The crucial point to see is that the usual retarded radiation is a kind of improbable-to-probable transition. A concentrated source is improbable, but given its existence, a system will evolve toward more probable regions of the phase space, i.e., the waves will spread. Advanced radiation is likewise a species of improbable-to-probable transitions. Given an improbable source in the past, it will spread backwards in time to more probable regions of the phase space too. Using Popper's famous mechanical wave example as an analogy, throwing a rock into a pond so that waves on the surface spread out into the future requires every bit the conspiracy that is needed for waves to converge on a point in order to eject a rock from the bottom. Both are equally likely, pace Popper; whether one or both happen depends upon the boundary conditions. The real asymmetry lies in the fact that in the past there are concentrated sources for waves, whereas in the future there tend not to be. See Price 1996, Arntzenius 1993, and Frisch 2000 for discussion of this controversial point.
These considerations do not mean the radiation arrow reduces in any sense to the thermodynamic arrow. Rather, the thing to say is that the radiation arrow just seems to be the statistical mechanical one, with the qualification that the medium sustaining the improbable-to-probable transition is electromagnetic.
Cosmology presents us with a number of apparently temporally asymmetric mechanisms. The most obvious one is the inexorable expansion of the universe. In cosmology the spatial scale factor a(t), which gives the distance between co-moving observers, is increasing. The universe seems to be uniformly expanding relative to our local frame. Since this temporal asymmetry occupies a unique status, it is natural to wonder whether it might be the ‘master’ arrow. The cosmologist T. Gold 1962 proposed just this. Believing that entropy values covary with the size of the universe, Gold asserts that at the maximum radius the thermodynamic arrow will ‘flip’ due to the re-contraction. However, as Tolman 1936 has shown in some detail, a universe filled with non-relativistic particles will not suffer entropy increase due to expansion, nor will an expanding universe uniformly filled with blackbody radiation increase its entropy. Interestingly, Tolman demonstrated that more realistic universes containing both matter and radiation will change their entropy contents. Coupled with expansion, various processes will contribute to entropy increase, e.g., energy will flow from the ‘hot’ radiation to the ‘cool’ matter. So long as the relaxation time of these processes is larger than the expansion time scale, they should generate entropy. We thus have a purely cosmological method of entropy generation.
Others (e.g., Davies 1994) have thought inflation provides a kind of entropy-increasing behavior -- again, given the sort of matter content we have in our universe. The inflationary model is an alternative of sorts to the standard big bang model, although by now it is so well entrenched in the cosmology community that it really deserves the tag ‘standard’. In this scenario, the very early universe is in a quantum state called a ‘false vacuum’, a state with a very high energy density and negative pressure. Gravity acts like Einstein's cosmological constant, so that it is repulsive rather than attractive. Under this force the universe enters a period of exponential inflation, with geometry resembling de Sitter space. When this period ends any initial inhomogeneities will have been smoothed to insignificance. At this point ordinary stellar evolution begins. Loosely associating gravitational homogeneity with low entropy and inhomogeneity with higher entropy, inflation is arguably another source of cosmological entropy generation.
There are other proposed sources of cosmological entropy generation, but these should suffice to give the reader a flavor of the idea. We shall not be concerned with evaluating these scenarios in any detail. Rather, our concern is about how these proposals explain time's arrow. In particular, how do they square with our earlier claim that the issue boils down to either assuming temporally asymmetric boundary conditions or of adding time reversal non-invariant laws of nature?
The answer is not always clear, owing in part to the fact that the separation between laws of nature and boundary conditions is especially slippery in the science of cosmology. Advocates of the cosmological explanation of time's arrow typically see themselves as explaining the origin of the needed low-entropy cosmological condition. Some explicitly state that special initial conditions are needed for the thermodynamic arrow, but differ with the conventional ‘statistical’ school in deducing the origin of these initial conditions. Earlier low-entropy conditions are not viewed as the boundary conditions of the spacetime. According to the cosmological schools, they came about a second or more after the big bang. But when the universe is the size of a small particle, a second or more is enough time for some kind of cosmological mechanism to bring about our low-entropy ‘initial’ condition. What cosmologists (primarily) differ about is the precise nature of this mechanism. Once the mechanism creates the ‘initial’ low entropy we have the same sort of explanation of the thermodynamic asymmetry as discussed in the previous section. Because the proposed mechanisms are supposed to make the special initial conditions inevitable or at least highly probable, this maneuver seems like the alleged ‘third way’ mentioned above.
The central question about this type of explanation, as far as we're concerned, is this: Is the existence of the low-entropy ‘initial’ state a consequence of the laws of nature alone or of the laws plus boundary conditions? In other words, does the proposed mechanism produce low-entropy states given any initial condition, or only given some? We want to know whether our question has merely been shifted back a step, whether the explanation is a disguised appeal to special initial conditions. Though we cannot here answer the question in general, we can say that the two mechanisms mentioned are not lawlike in nature. Expansion fails on two counts. There are boundary conditions in expanding universes that do not lead to an entropy gradient, i.e., conditions without the right matter-radiation content, and there are boundary conditions that do not lead to expansion, e.g., matter-filled Friedmann models that do not expand. Inflation fails at least on the second count. Despite advertising, arbitrary initial conditions will not give rise to an inflationary period (Earman 1995, pp. 152-3). Furthermore, it's not clear that inflationary periods will give rise to thermodynamic asymmetries (Price 1996, ch. 2). The cosmological scenarios do not seem to make the thermodynamic asymmetries a result of nomic necessity. The cosmological hypotheses may be true, and in some sense, they may even explain the low-entropy initial state. But they do not appear to provide an explanation of the thermodynamic asymmetry that makes it nomologically necessary or even likely.
Another way to see the point is to consider the question of whether the thermodynamic arrow would ‘flip’ if (say) the universe started to contract. Gold, as we said above, asserts that at the maximum radius the thermodynamic arrow must ‘flip’ due to the re-contraction. Not positing a thermodynamic flip while maintaining that entropy values covary with the radius of the universe is clearly inconsistent -- it is what Price 1996 calls the fallacy of a “temporal double standard”. Gold does not commit this fallacy, and so he claims that the entropy must decrease if ever the universe started to re-contract. However, as Albert 2000 writes, "there are plainly locations in the phase space of the world from which ... the world's radius will inexorably head up and the world's entropy will inexorably head down". Since that is the case, it doesn't follow from law that the thermodynamic arrow will flip during re-contraction; therefore, without changing the fundamental laws, the Gold mechanism cannot explain the thermodynamic arrow in the sense we want.
From these considerations we can understand what Price 1996 calls the basic dilemma: either we explain the earlier low-entropy condition Gold-style or it is inexplicable by time-symmetric physics (82). Because there is no net asymmetry in a Gold universe, we might paraphrase Price's conclusion in a more disturbing manner as the claim that the (local) thermodynamic arrow is explicable just in case (globally) there isn't one. However, notice that this remark leaves open the idea that the laws governing expansion or inflation are not TRI. (For more on Price's basic dilemma, see Callender 1998 and Price 1995.)
Quantum cosmology, it is often said, is the theory of the universe's initial conditions. Presumably this entails that its posits are to be regarded as lawlike. Because theories are typically understood as containing a set of laws, quantum cosmologists apparently assume that the distinction between laws and initial conditions is fluid. Particular initial conditions will be said to obtain as a matter of law. Hawking 1987 writes, for example, "we shall not have a complete model of the universe until we can say more about the boundary conditions than that they must be whatever would produce what we observe," (163). Combining such aspirations with the observation that thermodynamics requires special boundary conditions leads quite naturally to the thought that “the second law becomes a selection principle for the boundary conditions of the universe [for quantum cosmology]” (Laflamme 1994, 358). In other words, if one is to have a theory of initial conditions, it would certainly be desirable to deduce initial conditions that will lead to the thermodynamic arrow. This is precisely what many quantum cosmologists have sought.[5.] Since quantum cosmology is currently very speculative, it has been argued that it is premature to start worrying about what it says about time's arrow (Callender 1998). Nevertheless, there has been a substantial amount of debate on this issue (see Halliwell et al. 1994).
Some philosophers have sought an answer to the problem of time's arrow by claiming that time itself is directed. They do not mean time is asymmetric in the sense intended by advocates of the tensed theory of time. Their proposals are firmly rooted in the idea that time and space are properly represented on a four-dimensional manifold. The main idea is that the asymmetries in time indicate something about the nature of time itself. Christensen 1993 argues that this is the most economical response to our problem since it posits nothing besides time as the common cause of the asymmetries, and we already believe in time. A proposal similar to Christensen's is Weingard's 1977 ‘time-ordering field’. Weingard's speculative thesis is that spacetime is temporally oriented by a ‘time potential,’ a timelike vector field that at every spacetime point directs a vector into its future light cone. In other words, supposing our spacetime is temporally orientable, Weingard wants to actually orient it. The main virtue of this is that it provides a time sense everywhere, even in spacetimes containing closed timelike curves (so long as they're temporally orientable). As he shows, any explication of the ‘earlier than’ relation in terms of some other physical relation will have trouble providing a consistent description of time direction in such spacetimes. Another virtue of the idea is that it is in principle capable of explaining all the temporal asymmetries. If coupled to the various asymmetries in time, it would be the ‘master arrow’ responsible for the arrows of interest. As Sklar 1985 notes, Weingard's proposal makes the past-future asymmetry very much like the up-down asymmetry. As the up-down asymmetry was reduced to the existence of a gravitational potential -- and not an asymmetry of space itself -- so the past-future asymmetry would reduce to the time potential -- and not an asymmetry of time itself. 
Of course, if one thinks of the gravitational metric field as part of spacetime, there is a sense in which the reduction of the up-down asymmetry really was a reduction to a spacetime asymmetry. And if the metric field is conceived as part of spacetime -- which is itself a huge source of contention in philosophy of physics -- it is natural to think of Weingard's time-ordering field as also part of spacetime. Thus his proposal shares a lot in common with Christensen's suggestion.
This sort of proposal has been criticized by both Earman and Sklar on methodological grounds. Sklar 1985, for instance, claims that scientists would not accept such an explanation (111-2). One might point out, however, that many scientists did believe in analogues of the time-ordering field as possible causes of the CP violations.[6.] The time-ordering field, if it exists, would be an unseen (except through its effects) common cause of strikingly ubiquitous phenomena. Scientists routinely accept such explanations. To find a problem with the time-ordering field we need not invoke methodological scruples; instead we can simply ask whether it does the job asked of it. Is there a mechanism that will couple the time-ordering field to thermodynamic phenomena? Weingard says the time potential field needs to be suitably coupled (p. 130) to the non-accidental asymmetric processes, but neither he nor Christensen elaborates on how this is to be accomplished. Until this is addressed satisfactorily, this speculative idea must be considered interesting yet embryonic.
When explaining time's arrow, many philosophers and physicists have focused their attention upon the unimpeachable fact that real systems are open systems that are subjected to interactions of various sorts.[7.] We cannot truly isolate thermodynamic systems, and even if we could, it would probably not be for all time. To take the most obvious example, we cannot shield a system from the influence of gravity. At best, we can move systems to locations feeling less and less gravitational force, but we can never completely decouple a system from the gravitational field. Not only do we ignore the weak gravitational force when doing classical thermodynamics, but we also ignore less exotic matters, such as the walls in the standard gas in a box scenario. We can do this because the time it takes for a gas to reach equilibrium with itself is vastly shorter than the time it takes the gas plus walls system to reach equilibrium. For this reason we typically discount the effects of the box walls on the gas.
In this approximation many have thought there lies a possible solution to the problem of the direction of time. Indeed, many have thought herein lies a solution that does not change the laws of classical mechanics and does not allow for the nomological possibility of anti-thermodynamic behavior. In other words, advocates of this view seem to believe it embodies a third way.
The idea is to take advantage of what a random perturbation of the representative phase point would do to the evolution of a system. In phase space there is a tremendous asymmetry between the volume of points leading to equilibrium and points leading away from equilibrium. If the representative point of a system were knocked about randomly, then due to this asymmetry, it would be very probable that the system at any given time be on a trajectory leading toward equilibrium. Thus, if it could be argued that the earlier treatment of the statistical mechanics of ideal systems ignored a random perturber in the environment of the system, then one would seem to have a solution to our problems. Even if the perturbation were weak it would still have the desired effect. The weak, previously ignored ‘random’ knocking of the environment is the sought-after cause of the approach to equilibrium. Prima facie, this answer to the problem escapes the appeal to special initial conditions and the appeal to new laws.
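The phase-space asymmetry driving this argument -- vastly more microstates correspond to equilibrium than to nonequilibrium -- can be illustrated with a toy model. The following sketch (a minimal Python illustration, using the classic Ehrenfest two-chamber model as a stand-in for a randomly perturbed gas; the particle number and step count are illustrative assumptions, not anything from the text) starts with all particles in one chamber and applies random ‘kicks’:

```python
import random

def ehrenfest_step(n_left, n_total, rng):
    """One random 'kick': a particle chosen uniformly at random
    hops to the other chamber. It sits in the left chamber with
    probability n_left / n_total."""
    if rng.random() < n_left / n_total:
        return n_left - 1
    return n_left + 1

def relax(n_total=1000, n_left=1000, steps=20000, seed=0):
    """Start far from equilibrium (all particles on the left) and
    let the random knocking evolve the occupation number."""
    rng = random.Random(seed)
    for _ in range(steps):
        n_left = ehrenfest_step(n_left, n_total, rng)
    return n_left

final = relax()
# Because enormously many more configurations have a near-even split
# than a lopsided one, the random walk overwhelmingly drifts toward,
# and then fluctuates around, n_left ~ 500.
```

The model mimics relaxation to equilibrium purely through the volume asymmetry: no special dynamics is assumed beyond the random perturbation itself, which is just the interventionist's point -- and, as the next paragraphs argue, also the point at which the proposal runs into trouble.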
But only prima facie. A number of criticisms have been leveled against this maneuver. One that seems on the mark is the observation that if classical mechanics is to be a universal theory, then the environment must be governed by the laws of classical mechanics as well. The environment is not some mechanism outside the governance of physical law, after all, and when we treat it too, the ‘deus ex machina’ -- the random perturber -- disappears. If we treat the gas-plus-the-container walls as a classical system, it is still governed by time-reversible laws that will cause the same problem as we met with the gas alone. At this point one sometimes sees the response that the combined system of gas plus walls has a neglected environment too, and so on, and so on, until we get to the entire universe. It is then questioned whether we have a right to expect laws to apply universally (Reichenbach 1956, 81ff). Or the point is made that we cannot write down the Hamiltonian for all the interactions a real system suffers, and so there will always be something ‘outside’ what is governed by the time-reversible Hamiltonian. Both of these points rely, we suspect, on an underlying instrumentalism about the laws of nature. Our problem only arises if we assume or pretend that the world literally is the way the theory says; dropping this assumption naturally ‘solves’ the problem. Rather than further address these responses, let us turn to the claim that this maneuver need not modify the laws of classical mechanics.
If one does not make the radical proclamation that physical law does not govern the environment, then it is easy to see that whatever law describes the perturber's behavior, it cannot be the laws of classical mechanics if the environment is to do the job required of it. A time-reversal noninvariant law, in contrast to the TRI laws of classical mechanics, must govern the external perturber. Otherwise we can in principle subject the whole system, environment plus system of interest, to a Loschmidt reversal. The system's velocities will reverse, as will the velocities of the millions of tiny perturbers. ‘Miraculously’, as if there were a conspiracy between the reversed system and the millions of ‘anti-perturbers’, the whole system will return to a time reverse of its original state. What is more, this reversal will be just as likely as the original process if the laws are TRI. A minimal criterion of adequacy, therefore, is that the random perturbers be time reversal noninvariant. But the laws of classical mechanics are TRI. Consequently, if this ‘solution’ is to succeed, it must invoke new laws and modify or supplement classical mechanics. (Since the perturbations need to be genuinely random and not merely unpredictable, and since classical mechanics is deterministic, the same sort of argument could be run with indeterminism instead of irreversibility. See Price 2002 for a diagnosis of why people have made this mistake, and also for an argument objecting to interventionism for offering a ’redundant’ physical mechanism responsible for entropy increase.) [8.]
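The Loschmidt reversal invoked here can be made concrete numerically. The sketch below (a hypothetical Python illustration; the harmonic forces, particle number, and step sizes are our own assumptions for the demo) evolves a collection of particles under TRI Newtonian dynamics using velocity Verlet, an integrator that is itself time-reversible, then flips every velocity and runs the very same dynamics again -- and the system retraces its history exactly:

```python
import numpy as np

def accel(x):
    # Toy TRI dynamics: each particle pulled harmonically toward the origin.
    return -x

def verlet_step(x, v, dt):
    """One velocity-Verlet step. The scheme is time-reversal invariant
    (in exact arithmetic), mirroring the underlying Newtonian laws."""
    v_half = v + 0.5 * dt * accel(x)
    x_new = x + dt * v_half
    v_new = v_half + 0.5 * dt * accel(x_new)
    return x_new, v_new

rng = np.random.default_rng(0)
x0 = rng.normal(size=50)   # initial positions
v0 = rng.normal(size=50)   # initial velocities

# Evolve forward 1000 steps...
x, v = x0.copy(), v0.copy()
for _ in range(1000):
    x, v = verlet_step(x, v, dt=0.01)

# ...then perform the 'Loschmidt reversal': flip every velocity
# and evolve forward again under the same TRI law.
v = -v
for _ in range(1000):
    x, v = verlet_step(x, v, dt=0.01)

# The whole system returns to the time reverse of its original state.
assert np.allclose(x, x0) and np.allclose(v, -v0)
```

The point of the demonstration is the one made in the text: so long as the perturbers are governed by a TRI law, the reversed evolution is just as lawful as the original, so random perturbation alone cannot ground the asymmetry.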
To the best of our knowledge, our world is fundamentally quantum mechanical, not classical mechanical. Does this change the situation? ‘Maybe’ is perhaps the best answer. Not surprisingly, answers to the question are affected by one's interpretation of quantum mechanics. Quantum mechanics suffers from the notorious measurement problem, a problem which demands one or another interpretation of the quantum formalism. These interpretations fall broadly into two types, depending on their view of the unitary evolution of the quantum state (e.g., evolution according to the Schroedinger equation): they either say that there is something more than the quantum state, or that the unitary evolution is not entirely correct. The former are called ‘no-collapse’ interpretations while the latter are dubbed ‘collapse’ interpretations. This is not the place to go into the details of these interpretations, but we can still sketch the outlines of the picture painted by quantum mechanics (for more see Albert 1992).
Modulo some philosophical concerns about the meaning of time reversal (see Albert 2000, Callender 2000), the equation governing the unitary evolution of the quantum state is time reversal invariant. For interpretations that add something to quantum mechanics, this typically means that the resulting theory is time reversal invariant too (since it would be odd or even inconsistent to have one part of the theory invariant and the other part not). Since the resulting theory is time reversal invariant, it is possible to generate the problem of the direction of time just as we did with classical mechanics. While many details are altered in the change from classical to no-collapse quantum mechanics, the logical geography seems to remain the same.
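The time reversal invariance of the unitary evolution can also be checked directly in a small numerical sketch (a toy Python illustration under our own assumptions: a randomly chosen real symmetric Hamiltonian, for which the time-reversal operator T is simply complex conjugation, and arbitrary matrix size and evolution time). Evolving forward, applying T, and evolving forward again returns T applied to the initial state:

```python
import numpy as np

rng = np.random.default_rng(1)
# A random real symmetric Hamiltonian; since H* = H, complex
# conjugation implements the time-reversal operator T.
A = rng.normal(size=(4, 4))
H = (A + A.T) / 2

def U(t):
    """Unitary propagator exp(-iHt), built from the spectral
    decomposition of H (V diag(exp(-i w t)) V^T)."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)

# Forward evolution, then T (conjugation), then forward evolution again:
psi_t = U(2.0) @ psi0
back = U(2.0) @ psi_t.conj()

# TRI of the Schroedinger dynamics: we recover T applied to the
# initial state, i.e., the conjugate of psi0.
assert np.allclose(back, psi0.conj())
```

This is the formal sense in which no-collapse quantum mechanics inherits the classical predicament: the reversed (conjugated) evolution is exactly as lawful as the original.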
Collapse interpretations are more interesting with respect to our topic. Collapses interrupt or outright replace the unitary evolution of the quantum state. To date, they have always done so in a time reversal noninvariant manner. The resulting theory, therefore, is not time reversal invariant. This fact offers a potential escape from our problem: the transitions of type (2) in our above statement of the problem may not be lawful. And this has led many thinkers throughout the century to believe that collapses somehow explain the thermodynamic time asymmetry.
Mostly these postulated methods fail to provide what we want. We think gases relax to equilibrium even when they're not measured by Bohrian observers or Wignerian conscious beings. This complaint is, admittedly, not independent of more general complaints about the adequacy of these interpretations. But perhaps because of these controversial features they have not been pushed very far in explaining thermodynamics.
More satisfactory collapse theories exist, however. One, commonly known as GRW, can describe collapses in a closed system -- no dubious appeal to observers outside the quantum system is required. Albert (1994; 2001) has extensively investigated the impact GRW would have on statistical mechanics and thermodynamics. GRW would ground a temporally asymmetric probabilistic tendency for systems to evolve toward equilibrium. Anti-thermodynamic behavior is not impossible according to this theory. Instead it is tremendously unlikely. The innovation of the theory lies in the fact that although entropy is overwhelmingly likely to increase toward the future, it is not also overwhelmingly likely to increase toward the past (because there are no dynamic backwards transition probabilities provided by the theory). So the theory does not suffer from a problem of the direction of time as stated above.
This does not mean, however, that it removes the need for something like the Past Hypothesis. GRW is capable of explaining why, given a present nonequilibrium state, later states should have higher entropy; and it can do this without also implying that earlier states have higher entropy too. But it does not explain how the universe ever got into a nonequilibrium state in the first place. As indicated before, some are not sure what would explain this fact, if anything, or whether it's something we should even aspire to explain. The principal virtue GRW would bring to the situation, Albert thinks, is that it would solve or bypass various troubles involving the nature of probabilities in statistical mechanics.
More detailed discussion of the impact quantum mechanics has on our problem can be found in Albert 2000, North 2002, Price 2002. But if our superficial review is correct, we can say that quantum mechanics will not obviate our need for a Past Hypothesis though it may well solve (on a GRW interpretation) at least one problem related to the direction of time.
Without some new physics that eliminates or explains the Past Hypothesis, or some satisfactory ‘third way’, it seems we are left with a bald posit of special initial conditions. Again, one can question whether there really is anything unsatisfactory about this (Sklar 1993; Callender 1997). But perhaps we were wrong in the first place to think of the Past Hypothesis as a contingent boundary condition. The question ‘why these special initial conditions?’ would be answered with ‘it's physically impossible for them to be otherwise,’ which is always a conversation stopper. Indeed, Feynman (1965, 116) speaks this way when explaining the statistical version of the second law.
Absent a particular understanding of laws of nature, there is perhaps not much to say about the issue. But given particular conceptions of lawhood, it is clear that various judgments about this issue follow naturally -- as we will see momentarily. However, let's acknowledge that this may be to get matters backwards. It might be said that we first ought to find out whether the boundary conditions are lawlike, and then devise a theory of law appropriate to the answer. To decide whether or not the boundary conditions are lawlike based merely on current philosophical theories of law is to prejudge the issue. Perhaps this objection is really evidence of the feeling that settling the issue based on one's conception of lawhood seems particularly unsatisfying. And it is hard to deny this. Even so, it is illuminating to have a brief look at the relationships between some conceptions of lawhood and the topic of special initial conditions.
For instance, if one agrees with Mill that from the laws one should be able to deduce everything and one considers the thermodynamic part of that ‘everything,’ then the special initial condition will be needed for such a deduction. The modern heir of this conception of lawhood, the one associated with Ramsey and Lewis (see Loewer 1994), sees laws as the axioms of the simplest, most powerful, consistent deductive system possible. It is likely that the specification of a special initial condition would emerge as an axiom in such a system, for such a constraint may well make the laws much simpler than they otherwise would be.
We should not expect the naïve regularity view of laws to follow suit, however. On this sort of account, roughly, if Bs always follow As, then it is a law of nature that A causes B. To avoid finding laws everywhere, however, this account needs to assume that As and Bs are instantiated plenty of times. But the initial conditions occur only once.
For more robust realist conceptions of law, it's difficult to predict whether the special initial conditions will emerge as lawlike. Necessitarian accounts like Pargetter's 1984 maintain that it is a law that P in our world iff P obtains at every possible world joined to ours by a nomic accessibility relation. Without more specific information about the nature of the accessibility relations and the worlds to which we're related, one can only guess whether all of the worlds relative to ours have the same special initial conditions. Nevertheless some realist theories offer apparently prohibitive criteria, so they are able to make negative judgments. For instance, ‘universalist’ theories associated with Armstrong say that laws are relations between universals. Yet a constraint on initial conditions isn't in any natural way put in this form; hence it would seem the universalist theory would not consider this constraint lawlike.
Philosophical opinion is certainly divided. The problem is that a lawlike boundary condition lacks many of the features we ordinarily attribute to laws, e.g., multiple instances, governing temporal evolution, etc., yet different accounts of laws focus on different subsets of these features. When we turn to the issue at hand, what we find is the disagreement we expect.
A completely different problem going by the name ‘problem of the direction of time’ is the question of grounding various non-thermodynamic temporal asymmetries (to be described in detail below). In this problem, we take the thermodynamic arrow as given and use it to explain other temporally asymmetric features of the world, e.g., causation, knowledge. Boltzmann famously suggested that many of these asymmetries are given by the direction of entropy increase. And Reichenbach 1956 modified this to some of these temporal asymmetries being given by the direction of dominant entropy increase among all so-called “branch systems.”
Sklar 1985 provides a useful discussion of this topic. He points out that the reduction of these temporal asymmetries to the entropic arrow evades many of its obvious shortcomings if we conceive of it as a potential a posteriori scientific reduction of the kind now very familiar. The question is then whether it is so reduced (as, for instance, the up-down asymmetry plausibly reduces to the local gravitational gradient) or whether there is merely a correlation between the two (as, for example, there is between left-right and parity violations in high-energy particle physics).
The question is not easily answered partly due to vagueness about what is meant by both the concept to be reduced and the reducing concept. What temporal asymmetries are we concerned with, and exactly what kind of entropic relation do we intend?
The temporal asymmetries with which we are concerned are all the phenomena that we associate with the past and future directions being different. In addition to all of the temporal asymmetries from physics (thermodynamic arrow, electromagnetic arrow, Hubble expansion, etc.), there are a number of different asymmetries with which we are all familiar. The ‘direction of time’ might then be a broad umbrella covering the following:
1. The psychological arrow. This controversial arrow is actually many different asymmetries. One, though much disputed, is that we seem to share a psychological sense of passage through time. Allegedly, we sense a moving ‘now’, the motion of the present as events are transformed from future to past. Another is that we have very different attitudes toward the past than toward the future. We dread future but not past headaches and prison sentences.
2. The mutability arrow. We feel the future is ‘open’ or indeterminate in a way the past is not. The past is closed, fixed for all eternity. Related to this, no doubt, is the feeling that our actions are essentially tied to the future and not the past. The future is mutable whereas the past is not.
3. The epistemological arrow. Although we believe that we know some facts about the future, the vast majority of propositions we claim to know are about the past. I know that yesterday's broken egg on the floor had an outline similar to Chile's boundaries, but I have no idea what country tomorrow's broken egg will look like. Events typically leave many more traces in their future than in their past. When I say something embarrassing, information representing that event is encoded on sound and light waves that form a continually expanding spherical shell in my future light-cone. I am potentially further embarrassed throughout my whole future lightcone. Yet in the backward lightcone stretching from the event there is little or no indication of the unfortunate event.
4. The explanation-causation-counterfactual arrow. This arrow is actually three, though it seems plausible that there are connections among them. Backwards causation may be physically possible, but if it is, it seems either to never happen or be exceedingly rare. Causes typically occur before their effects. Related to the causal asymmetry in some fashion or other is the asymmetry of explanation. Usually good explanations appeal to events in the past of the event to be explained, not to events in the future. It may be that this is just a prejudice that we ought to dispense with, but it is an intuition that we frequently have. Finally, and no doubt this is again related to the other two arrows as well as the mutability arrow, we -- at least naively -- believe the future depends counterfactually on the present in a way that we do not believe the past depends counterfactually on the present.
For example, consider a body moving uniformly from point A to point B in accord with Newton's first law of motion.[9.] A force is impressed on the body at B and the body changes direction and proceeds uniformly towards C.
We will assume the body is a molecule travelling in a relative vacuum, and that the only trace left by the force is the altered path of the body. The solid lines in the diagram represent what we take to be the actual path of the body, the broken lines the alternative paths. Now consider two competing subjunctive conditionals:
If no force had been impressed upon the body at B,
(i) it would have moved uniformly in the right line ABD.
(ii) it would have moved uniformly in the right line EBC.
The problem is to find an objective reason for our preference of (i). It seems that AB is co-tenable with the counterfactual antecedent. If the antecedent were true, it seems the body would have continued from B to D. But BC is also a leg of the actual path of the body, and to what do we appeal besides temporal asymmetry to reject BC as co-tenable with the counterfactual supposition? Perhaps after our intuitions have been tutored by physics we should say that either (i) or (ii) is correct. Or perhaps the asymmetry relies on thermodynamics, in which case the world described above is too bare to support our asymmetry.
Some authors -- particularly defenders of the tensed theory of time -- dismiss out of hand the idea of grounding the direction of time on the direction of material processes in time. But with so many asymmetric processes in the world, and with homo sapiens being just a part of this world, there are strong reasons to favor a connection between the two in many cases. But what is the connection?
Many authors have explicitly or implicitly proposed various ‘dependency charts’ that are supposed to explain which of the above arrows depend on which for their existence. Horwich 1987, for instance, argues for an explanatory relationship wherein the counterfactual arrow depends for its existence on the causal arrow, which depends on the arrow of explanation, which depends on the epistemological arrow, which in turn depends on the fork asymmetry that he associates with some chaotic conditions in the early universe. One can imagine other ways to plausibly arrange the dependency chart. Lewis 1979 thinks an alleged over-determination of traces grounds the asymmetry of counterfactuals and that this in turn grounds the rest. The chart one judges most appropriate will depend, to a large degree, upon one's general philosophical stance on realism and Humeanism, etc., and one's understanding of the above arrows.
Which chart is the correct one is not our concern here. Rather, returning to our main topic, the Boltzmann entropic reduction of time-direction, we now have a somewhat clearer question: do any or all of the above temporal asymmetries depend for their existence upon the thermodynamic time-asymmetry? At the end of his 1979, for instance, Lewis hints that the asymmetry of traces is linked to the thermodynamic arrow, but he can offer no further explanation. Reichenbach 1956, Gruenbaum 1963, and Smart 1967 have developed entropic accounts of the knowledge asymmetry. Various people, for instance Dowe 1992, have tied the direction of causation to the entropy gradient. And some have also tied the psychological arrow to this gradient (for a discussion see Kroes 1985).
One can think of reasons for being quite pessimistic about any straightforward positive link between these temporal asymmetries and the entropy gradient. We really don't know how to bridge the gap between the thermodynamic arrow and the other arrows. And the gap is huge when you start thinking about the science of thermodynamics. Thermodynamics is a science with very precise and definite restrictions on the applicability of its concepts. A system has an entropy, for instance, only when it is thermally isolated and in equilibrium. Yet it is clear that our experience of the above temporal asymmetries carves up the world much differently than thermodynamics does. System A's doing f at time t might cause system B's doing g at time t* (where t* > t), yet A and B may not, and typically will not, have well-defined entropies.
The objections (see Earman 1974, Horwich 1987) to the entropic account of the knowledge asymmetry are worth recalling. The entropic account claimed that because we know there are many more entropy-increasing rather than entropy-decreasing systems in the world (or our part of it), we can infer when we see a low-entropy system that it was preceded and caused by an interaction with something outside the system. To take the canonical example, upon seeing a footprint in the sand, we can infer, due to its high order, that it was caused by something previously also of high (or higher) order, i.e., someone walking.
Though this brief sketch does not do justice to the entropic account, one can still see that it faces some very severe and basic challenges. First, do footprints on beaches have well-defined thermodynamic entropies? To describe the example we switched from low entropy to high order, but the association between entropy and our ordinary concept of order is tenuous at best and usually completely misleading. To describe the range of systems about which we have knowledge, the account needs something broader than the thermodynamic entropy. But what? And why expect whatever it is to behave like entropy in some respects but not (in terms of its definability) in others? Second, the entropic account doesn't license the inference to a human being walking on the beach. All it tells you is that the grains of sand in the footprint interacted with its environment previously, which barely scratches the surface of our ability to tell detailed stories about what happened in the past. Third, even if we have a broader understanding of entropy, it still doesn't seem that this broader concept always works. Consider Earman's 1974 example of a bomb destroying a city. From the destruction we may infer that a bomb went off; yet the bombed city does not have lower entropy than its surroundings or even any type of intuitively higher order than its surroundings.
Boltzmann's suggestion that the temporal asymmetries discussed above are explained by the direction of increasing entropy, though attractive at an abstract level, is hard to maintain when one looks at the details. Still, the more general idea, that these temporal asymmetries are due to the asymmetric behavior of physical processes in our world (whatever their origin, law or Past Hypothesis) as opposed to more metaphysical sources, seems very plausible. Much work remains to be done on this problem.