This is a file in the archives of the Stanford Encyclopedia of Philosophy.


Molecular Biology

The field of molecular biology studies macromolecules and the macromolecular mechanisms found in living things, such as the molecular nature of the gene and its mechanisms of replication, mutation, and expression. Given the fundamental importance of these macromolecular mechanisms throughout the history of molecular biology, it will be argued that a philosophical focus on the concept of a mechanism generates the clearest picture of molecular biology's history, its key concepts, and the case studies that philosophers of science have drawn from the field.

This encyclopedia entry is organized around these three themes. First, a historical overview of developments in molecular biology, from its origins to the present, pays special attention to the features of this history referenced by philosophers. Philosophical analysis then turns to the key concepts in the field: mechanism, information, and the gene. Finally, philosophers have used molecular biology as a case study to address more general issues in the philosophy of science, such as theory reduction and the integration of fields; the relationship between scientific explanation, laws, and theory structure; and the role of experiments in producing reliable scientific results. Each of these issues is understood most clearly in molecular biology with a focus on the field's attention to mechanisms.

1. History of Molecular Biology

Despite its prominence in the contemporary life sciences, molecular biology is a relatively young discipline, originating in the 1930s and 1940s, and becoming institutionalized in the 1950s and 1960s. It should not be surprising, then, that many of the philosophical issues in molecular biology are closely intertwined with this recent history. This section sketches four facets of molecular biology's development: its origins, its classical problems, its subsequent migration into other biological domains, and its more recent turn to genomics. The rich historiography of molecular biology can only be briefly utilized in this shortened history. (See, for example, Abir-Am 1985, 1987, 1994; Burian 1993; de Chadarevian 2002; de Chadarevian and Gaudilliere 1996; de Chadarevian and Strasser 2002; Holmes 2001; Judson 1980, 1996; Kay 1993; Morange 1998; Olby 1979, 1990, 1994, 2003; Rheinberger 1997; Sapp 1992; Sarkar 1996a; Zallen 1996. Also see autobiographical accounts by biologists, such as Brenner 2001; Cohen 1984; Crick 1988; Echols 2001; Jacob 1988; Kornberg 1989; Luria 1984; Watson 1968, 2002; Wilkins 2003.)

1.1 Origins

The field of molecular biology arose from the convergence of work by geneticists, physicists, and structural chemists on a common problem: the structure and function of the gene. In the early twentieth century, although the nascent field of genetics was guided by Mendel's law of segregation (two alleles of a gene separate, i.e., segregate, during the formation of the germ cells so that each germ cell has one but not the other) and law of independent assortment (genes in different linkage groups assort independently in the formation of germ cells), the actual mechanisms of gene reproduction, mutation and expression remained unknown. Thomas Hunt Morgan and his colleagues utilized the fruit fly, Drosophila, as a model organism to study the relationship between the gene and the chromosomes in the hereditary process (Morgan 1926; discussed in Darden 1991; Darden and Maull 1977; Kohler 1994; Roll-Hanson 1978; Wimsatt 1992). A former student of Morgan's, Hermann J. Muller, recognized the “gene as a basis of life,” and so set out to investigate its structure (Muller 1926). Muller discovered the mutagenic effect of X-rays on Drosophila, and utilized this phenomenon as a tool to explore the size and nature of the gene (Carlson 1966, 1971, 1981; Crow 1992; Muller 1927). But despite the power of mutagenesis, Muller recognized that, as a geneticist, he was limited in the extent to which he could explicate the more fundamental properties of genes and their actions. He concluded a 1936 essay: “The geneticist himself is helpless to analyse these properties further. Here the physicist, as well as the chemist, must step in. Who will volunteer to do so?” (Muller 1936, 214)

Muller's request did not go unanswered. The next decade saw several famous physicists turn their attention to the biological question of inheritance (Keller 1990; Kendrew 1967). In What is Life?, the physicist Erwin Schroedinger (1944) proposed ways in which the principles of quantum physics might account for the stability, yet mutability, of the gene. He speculated that the gene might be a kind of irregular “aperiodic crystal” playing a role in a “hereditary code-script.” This book influenced many younger scientists, physicists as well as biologists, considering new avenues of research (Elitzur 1995; Moore 1989; Olby 1994, 240-247; Sarkar 1991; for a reinterpretation see Kay 2000, 59-66).

A more substantive impact came from the migration of Max Delbrueck into biology. Delbrueck became interested in the physical basis of heredity after hearing a lecture by his teacher, quantum physicist Niels Bohr (1933), which expounded a principle of complementarity between physics and biology. In contrast to Schroedinger, Bohr (and subsequently Delbrueck) did not seek to reduce biology to physics; instead, the goal was to understand how each discipline complemented the other. Delbrueck, with this framework in mind, visited Morgan's fly lab in 1937. But Delbrueck did not take up Drosophila; he considered even the fruit fly too complex a system in which to unravel the unique characteristic of life: self-reproduction. Delbrueck chose instead to use bacteriophage, viruses that infect bacteria and then multiply very rapidly. The establishment of "The Phage Group" in the early 1940s by Delbrueck and another physicist-turned-biologist Salvador Luria marked a critical point in the rise of molecular biology (Brock 1990; Cairns et al. 1966; Fischer and Lipson 1988; Fleming 1968; Lewontin 1968; Luria 1984; Morange 1998, Ch. 4; Stent 1968). A famous phage experiment by Alfred Hershey and Martha Chase (1952) tracked the chemical components of phage as they entered bacteria. The results provided evidence, adding to that of Oswald Avery's earlier work on bacteria (Avery et al. 1944), that genes were not proteins but deoxyribonucleic acid (DNA).

While Delbrueck facilitated the collaboration between physicists and biologists, he was largely dismissive of the chemical details that bridged these fields. This was in contrast to Delbrueck's colleague at Cal Tech, Linus Pauling, who utilized his knowledge of structural chemistry to study macromolecular structure. Pauling did both theoretical and experimental work important in the subsequent development of molecular biology. His theoretical work on the nature of the chemical bond supplied an understanding of how large macromolecules could be stable (Pauling 1939). The concept of stable macromolecules, encompassing both proteins and nucleic acids, was a necessary prerequisite to the study of their structure (Olby 1979). Pauling, in contrast to biochemists, investigated weak forms of bonding, such as hydrogen bonding. These weaker forms of bonding were later discovered to play important roles in the structures and functions of proteins and nucleic acids (Crick 1996; Sarkar 1998, Ch. 6). Pauling's lab at Cal Tech also made use of the technique of x-ray crystallography, which provided a means to investigate molecular structure: x-rays bombarding a molecule left unique images on photographic plates due to the diffraction of the x-rays by the molecule. Combining this powerful methodology with the building of scale models, Pauling discovered the alpha-helical structure of proteins (Pauling and Corey 1950; Pauling et al. 1951), and eventually set his sights on the structure of DNA (Pauling and Corey 1953; for historical treatments of this research see Hager 1995; Pauling 1970).

Recognizing quite early the importance of these new physical and structural chemical approaches to biology, Warren Weaver, then the director of the Natural Sciences section of the Rockefeller Foundation, introduced the term “molecular biology” in a 1938 report to the Foundation. Weaver wrote,

And gradually there is coming into being a new branch of science—molecular biology—which is beginning to uncover many secrets concerning the ultimate units of the living cell….in which delicate modern techniques are being used to investigate ever more minute details of certain life processes. (quoted in Olby 1994, 442)

But perhaps a more telling account of the term's origin came from Francis Crick's explanation for why he began calling himself a molecular biologist: “I myself was forced to call myself a molecular biologist because when inquiring clergymen asked me what I did, I got tired of explaining that I was a mixture of crystallographer, biophysicist, biochemist, and geneticist, an explanation which in any case they found too hard to grasp” (quoted in Stent 1969, 36).

Crick mentioned he was, in part, a biochemist. Likewise, Michel Morange (1998) said, “Molecular biology is a result of the encounter between genetics and biochemistry, two branches of biology that developed at the beginning of the twentieth century.” Both molecular biologists and biochemists did (and continue to) work at the same size level and investigate some of the same mechanisms, such as protein synthesis. However, the two fields had different historical trajectories.

The early history of the two fields may be somewhat simplistically divided according to Aristotle's two features of life: biochemistry was concerned with nutrition (recharacterized as metabolism more generally) and molecular biology (along with its more direct predecessor classical genetics) investigated reproduction. In contrast to molecular biology, biochemistry emerged as a field earlier in the twentieth century. It traced its roots to animal chemistry and medical chemistry of the nineteenth century (Kohler 1982). Much of biochemistry's focus (from the perspective of what is important for molecular biology's questions about the genetic material) was on proteins and enzymes. The gene, usually of little concern to biochemists, was thought to be a protein until evidence in favor of DNA began to emerge in the 1940s and 1950s. In biochemical textbooks prior to 1953, nucleic acids (DNA and ribonucleic acid, RNA) were relegated to a minor chapter. The discoveries of the twenty-some amino acids, the building blocks of proteins, were major achievements of early twentieth century biochemistry. Items of interest to biochemists were covalent bonding (a strong form of chemical bonding that connects amino acids in proteins), the action of enzymes (proteins that act as catalysts in biochemical reactions), and the energy requirements for reactions to occur. After Watson and Crick's (1953a) discovery of the structure of DNA, biochemistry showed increased emphasis on nucleic acids (see, e.g., White et al.'s Principles of Biochemistry from 1954 through subsequent editions, e.g., White et al. 1978).

This brief recapitulation of the origins of molecular biology reflects themes addressed by philosophers, such as reduction (see Section 3.1) and the concept of the gene (see Section 2.3). For Schroedinger, biology was to be reduced to the more fundamental principles of physics, while Delbrueck instead resisted such a reduction and sought what made biology unique. Muller's shift from classical genetics to the study of gene structure raises the question of the relation between the classical and molecular concept of the gene. These issues will be examined below.

1.2 Classical Problems

Molecular biology's classical period began in 1953, with James Watson and Francis Crick's discovery of the double helical structure of DNA (Watson and Crick 1953a, 1953b). Watson and Crick's scientific relationship unified various disciplinary approaches: Watson, a student of Luria and the phage group, recognized the need to utilize crystallography to elucidate the structure of DNA; Crick, a physicist enticed by Schroedinger's What is Life? to turn to biology, became trained in, and contributed to the theory of, x-ray crystallography. At Cambridge University's Cavendish Laboratory, Watson and Crick found that they shared interests in genes and the structure of DNA.

In the oft told story (Watson 1968), Watson and Crick collaborated to build a model of the double helical structure of DNA, with its two helical strands held together by hydrogen-bonded base pairs. They made extensive use of data from x-ray crystallography work on DNA by Maurice Wilkins and Rosalind Franklin at King's College, London (Maddox 2002), Crick's theoretical work on crystallography (Crick 1988), and the model building techniques pioneered by Pauling (de Chadarevian 2002; Judson 1996; Olby 1970, 1994).

With the structure of DNA in hand, molecular biology shifted its focus to how the double helical structure aided elucidation of the mechanisms of genetic replication and function, the keys to understanding the role of genes in heredity. This subsequent research was guided by the notion that the gene was an informational molecule. The linear sequence of nucleic acid bases along a strand of DNA provided coded information for directing the linear ordering of amino acids in proteins (Crick 1958). The genetic code came to be characterized as the relation between a set of three bases on the DNA (“a codon”) and one of twenty amino acids, the building blocks of proteins. Attempts to unravel the genetic code included failed theoretical efforts, as well as competition between geneticists (Crick et al. 1961) and biochemists. An important breakthrough came in 1961 when biochemists Marshall Nirenberg and J. Heinrich Matthaei, at the (US) National Institutes of Health (NIH), discovered that a unique sequence of nucleic acid bases could be read to produce a unique amino acid product (Nirenberg and Matthaei 1961; for discussion, see Judson 1996; Kay 2000).
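
The codon-to-amino-acid relation just described can be rendered as a simple lookup table. The following is an illustrative sketch, not part of the original entry; it shows only a handful of the 64 codons of the standard genetic code:

```python
# A fragment of the standard genetic code: RNA codons (base triplets)
# mapped to amino acids. Only 5 of the 64 codons are shown here.
GENETIC_CODE = {
    "AUG": "Met",   # methionine (also the usual start codon)
    "UUU": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "AAA": "Lys",   # lysine
    "UAA": "STOP",  # one of the three termination codons
}

def translate(mrna: str) -> list[str]:
    """Read an mRNA sequence three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = GENETIC_CODE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```

In these terms, Nirenberg and Matthaei's breakthrough corresponds to showing that a synthetic poly-U message, translate("UUUUUU"), yields a chain consisting only of phenylalanine.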

With the genetic code elucidated and the relationship between genes and their molecular products traced, it seemed in the late 1960s that the concept of the gene was secure in its connection between gene structure and gene function. The machinery of protein synthesis translated the coded information in the linear order of nucleic acid bases into the linear order of amino acids in a protein. However, such colinear simplicity did not persist. In the late 1970s, a series of discoveries by molecular biologists complicated the straightforward relationship between a single, continuous DNA sequence and its protein product. Overlapping genes were discovered (Barrell et al. 1976); such genes were considered "overlapping" because two different amino acid chains might be read from the same stretch of nucleic acids by starting from different points on the DNA sequence. And split genes were found (Berget et al. 1977; Chow et al. 1977). In contrast to the colinearity hypothesis that a continuous nucleic acid sequence generated an amino acid chain, it became apparent that stretches of DNA were often split between coding regions (exons) and non-coding regions (introns). Moreover, the exons might be separated by vast portions of this non-coding, supposedly "junk DNA." The distinction between exons and introns became even more complicated when alternative splicing was discovered the following year (Berk and Sharp 1978). A series of exons could be spliced together in a variety of ways, thus generating a variety of molecular products. Discoveries such as overlapping genes, split genes, and alternative splicing forced molecular biologists to rethink their understanding of what actually made a gene…a gene (Portin 1993).
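
Two of these discoveries can be sketched schematically. The sequences below are toy examples of my own, purely for illustration:

```python
# Overlapping genes: the same bases yield different codon series when
# read from different starting points (reading frames).
def codons(sequence: str, start: int) -> list[str]:
    """Partition a sequence into base triplets beginning at `start`."""
    return [sequence[i:i + 3] for i in range(start, len(sequence) - 2, 3)]

seq = "AUGCAUGCAUGC"
print(codons(seq, 0))  # ['AUG', 'CAU', 'GCA', 'UGC']
print(codons(seq, 1))  # ['UGC', 'AUG', 'CAU']

# Alternative splicing: the same series of exons, joined in different
# combinations, yields different molecular products.
def splice(exons: list[str], kept: list[int]) -> str:
    """Join the exons whose indices appear in `kept`, in order."""
    return "".join(exons[i] for i in kept)

exons = ["AUG", "CCU", "GGA"]
print(splice(exons, [0, 1, 2]))  # AUGCCUGGA
print(splice(exons, [0, 2]))     # AUGGGA
```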

During this period of the 1970s, molecular biologists developed a variety of techniques for manipulating the genetic material. The recombining of DNA from different species was made possible by the discovery of restriction enzymes that cut DNA at specific sites and ligases that then join these DNA segments. A segment of DNA from one species could be removed and spliced into the DNA of another species. Such transgenic forms aided both theoretical study and practical applications and also caused concern about possible hazards (Krimsky 1982; Watson and Tooze 1981).

These developments in molecular biology have received philosophical scrutiny. Molecular biologists sought to discover mechanisms (see Section 2.1), drawing the attention of philosophers to this concept. Also, conceptualizing DNA as an informational (see Section 2.2) molecule was a move that philosophers have subjected to critical scrutiny. Finally, the concept of the gene (see Section 2.3) itself has intrigued philosophers. Complex molecular mechanisms, such as alternative splicing, have obligated philosophers to consider to what the term “gene” actually refers.

1.3 Going Molecular

In a 1963 letter to Max Perutz, molecular biologist Sydney Brenner foreshadowed what would be molecular biology's next intellectual migration:

It is now widely realized that nearly all the “classical” problems of molecular biology have either been solved or will be solved in the next decade…. Because of this, I have long felt that the future of molecular biology lies in the extension of research to other fields of biology, notably development and the nervous system. (Brenner, letter to Perutz, 1963)

Along with Brenner, in the late 1960s and early 1970s, many of the leading molecular biologists redirected their research agendas, utilizing the newly developed molecular techniques to investigate unsolved problems in other fields.

The discovery of coordinated gene regulation in bacteria by molecular biologists at first appeared to provide a general, theoretical model for transforming descriptive embryology into molecular developmental biology. Francois Jacob, Jacques Monod, and their colleagues at the Institut Pasteur in Paris discovered that three genes were coordinately controlled. Escherichia coli bacteria normally did not make enzymes for metabolizing the sugar in milk, but when placed in a medium with lactose as a food source, genes for metabolizing that sugar were induced. Work on induction showed that the inducer deactivated a repressor, a protein that was bound to the DNA and stopped synthesis of the messenger RNA that produced the enzymes. The group of coordinately controlled genes and their regulatory DNA sites was called an "operon" (Jacob and Monod 1961; discussed in Morange 1998, Ch. 14; Schaffner 1974a). At first, it was assumed that this derepression model might prove to be the way, in general, that genes were controlled in organisms undergoing embryological development. As most cells in developing organisms seemed to have the same amount of DNA, it had long been a puzzle how they differentiated into the many different cell types in the body. However, further work showed that many different forms of gene regulation occurred other than by derepression. Nonetheless, molecular biology aided in embryology "going molecular," as developmental biologists began the study of gene regulation during embryological development. That work continues today. (For a relatively recent philosophical debate concerning molecular-developmental biology, see Rosenberg 1997 along with critics of this perspective, such as Keller 1999; Laubichler and Wagner 2001; and Robert 2001. More general philosophical discussions can also be found in the entries on developmental biology and evolution and development.)

In addition to developmental biology, the study of behavior and the nervous system lured some molecular biologists. Finding appropriate model organisms that could be subjected to molecular genetic analyses proved challenging. Returning to the fruit flies used in classical genetics, Seymour Benzer induced behavioral mutations in Drosophila as a “genetic scalpel” to investigate the pathways from genes to behaviors (Benzer 1968; Weiner 1999). At Cambridge, Sydney Brenner developed the nematode worm, Caenorhabditis elegans to study the nervous system, as well as the genetics of behavior (Brenner 1973, 2001; Ankeny 2000). Nirenberg used neuroblastomas (malignant tumors composed of undifferentiated neurons) as a model system to study the development of neural tissue (for an online exhibit of Nirenberg's transition to neurobiology, see the National Library of Medicine's Profiles in Science study of Nirenberg, The Marshall W. Nirenberg Papers).

The techniques of molecular biology enabled numerous other fields to go molecular. The study of cells was transformed from descriptive cytology into molecular cell biology (Alberts et al. 1983; Alberts et al. 2002). Molecular evolution developed as a phylogenetic method for the comparison of DNA sequences and whole genomes (Dietrich 1998). The immunological relationship between antibodies and antigens was recharacterized at the molecular level (Podolsky and Tauber 1997; Schaffner 1993). The study of oncogenes in cancer research is just one example of molecular medicine (Morange 1997). However, not all attempts to find the molecular basis of biological phenomena met with early success, such as the claim that RNA molecules coded memories (Morange 1998, Ch. 15).

This expansion of molecular biology as other fields went molecular led some to distinguish molecular genetics from molecular biology. Philosophers (e.g., Kitcher 1984) often use “molecular genetics” synonymously with classical molecular biology. Alternatively, “molecular genetics” may refer only to results produced by cross-breeding variants to produce hybrid organisms (a usage extended from techniques of Mendelian genetics to genetic manipulations in bacteria). Another usage of “molecular genetics” is to refer to any study of genetics at the molecular level, that is any study of the molecular biology of the gene, such as in the new genomic studies.

1.4 Going Genomic

In the 1970s, as many of the leading molecular biologists were migrating into other fields, molecular biology itself was “going genomic.” Genes, it became clear, did not work in isolation; they often interacted with each other (epistasis) and with many other components of the cell, so a more complicated picture of genes within the larger genome was needed.

The genome is a collection of nucleic acid base pairs within an organism's cells (adenine (A) pairs with thymine (T) and cytosine (C) with guanine (G)). The number of base pairs varies widely among species. For example, the flu-causing Haemophilus influenzae has roughly 1.8 million base pairs in its genome (Fleischmann et al. 1995), while the flu-catching Homo sapiens carries more than 3 billion base pairs in its genome (International Human Genome Sequencing Consortium 2001 [“Consortium” hereafter], Venter et al. 2001). The history of genomics is the history of the development and use of new experimental and computational methods for producing, storing, and interpreting such sequence data.
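
The Watson-Crick pairing rules mentioned above can be stated in a couple of lines. This is an illustrative sketch, not drawn from the entry itself:

```python
# Watson-Crick base pairing: adenine (A) pairs with thymine (T),
# and cytosine (C) pairs with guanine (G).
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(strand: str) -> str:
    """Return the strand whose bases pair opposite the given strand."""
    return "".join(PAIRS[base] for base in strand)

print(complementary_strand("ATCG"))  # TAGC
```

Because each pairing rule is symmetric, taking the complement twice returns the original strand.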

Frederick Sanger played a seminal role in initiating such developments. Sanger developed protein sequencing techniques and used them to elucidate the amino acid sequence of the protein insulin in the mid-1950s. In 1962, Sanger began sequencing nucleic acids and developed increasingly improved techniques in the 1970s. Kary Mullis, inspired in part by Sanger's sequencing methodologies, developed polymerase chain reaction (PCR), a procedure wherein small samples of DNA were amplified (Saiki et al. 1985; for historical treatments of the discovery of PCR, see Rabinow 1996; Morange 1998, Ch. 20). At Harvard, Allan Maxam and Walter Gilbert developed another sequencing method, which proved less efficient than Sanger's (Maxam and Gilbert 1977; a sequencing autobiography can be found in Sanger 1988; also see Culp 1995; de Chadarevian 2002; Judson 1992; Little 2003).

In the mid 1980s, after the development of sequencing techniques, the United States Department of Energy (DoE) originated a project to sequence the human genome (initially as part of a larger plan to determine the impact of radiation on the human genome induced by the Hiroshima and Nagasaki bombings) (Consortium 2001, 862). The resulting Human Genome Project (HGP) (see the entry on The Human Genome Project), managed jointly by the DoE and the NIH, both utilized existing sequencing methodologies and introduced new ones. Indeed, the now-famous controversy that emerged between the public Consortium of international sequencing centers and the private sequencing corporation Celera in their race to generate a "rough draft" of the human genome was predicated on different sequencing methodologies and debates over the accuracy and efficiency of such methodologies (Consortium 2001; Roberts 2001; Venter et al. 2001; Venter 2002; a scholarly history of the HGP has yet to be written, however more popular accounts can be found in Cook-Deegan 1994; Davies 2001; Sulston and Ferry 2002).

While the human genome project has received the most public attention, more than 160 genomes have been sequenced to date (such developments can be monitored at the National Human Genome Research Institute). Experiments in functional genomics that cannot ethically be conducted in humans can be performed in these model organisms. Also, there has been a growing trend for biological disciplines to "go genomic," including behavioral genetics (Plomin et al. 2003), developmental biology (Srinivasan and Sommer 2002), cell biology (Taniguchi et al. 2002), and evolution (Ohta and Kuroiwa 2002). Genomics has also been institutionalized in genomic textbooks (Cantor and Smith 1999) and journals, such as Genomics and Genome Research. Moreover, technologies for gathering, storing, and processing the huge amount of genomic data continue to be developed and refined.

2. Concepts in Molecular Biology

Key concepts in molecular biology are mechanism, information, and gene, as the following quotations from biologists indicate. Michel Morange in his History of Molecular Biology said that the "heart" of the discipline of molecular biology consists of understanding the "mechanisms of information exchange within cells" (Morange 1998, 176). In a seminal paper announcing the discovery of messenger RNA (the intermediary between the DNA of the gene and the protein for which it carries information), Francois Jacob and Jacques Monod claimed,

The property attributed to the structural messenger of being an unstable intermediate is one of the most specific and novel implications of this scheme…This leads to a new concept of the mechanism of information transfer, where the protein synthesizing centers (ribosomes) play the role of non-specific constituents which can synthesize different proteins, according to specific instructions which they receive from the genes through M-RNA. (Jacob and Monod 1961, 353, emphasis added)

And in a 1980 review of the achievements of molecular biology, Bernard Davis said that the field's central areas of concentration included “molecular information transfer” and “the extraordinary unity in the molecular mechanisms underlying the rich diversity of biology” (Davis 1980, 78). Hence, major tasks for philosophers of molecular biology have been and continue to be analyzing the concepts of mechanism, information, and gene in order to understand how they have been, are, and should be used.

2.1 Mechanism

A number of philosophers of science have argued for the importance of mechanisms in biology. William Wimsatt (1972, 67) said, “At least in biology, most scientists see their work as explaining types of phenomena by discovering mechanisms….” Richard Burian (1993, 389) noted that molecular biology “mainly studies molecular mechanisms.” Stuart Glennan (2002, S344) provided a characterization of mechanism in terms of decomposition into parts and their interactions: “A mechanism for a behavior is a complex system that produces that behavior by the interaction of a number of parts, where the interactions between parts can be characterized by direct, invariant change-relating generalizations.” An alternative to the decompositional account was developed by Peter Machamer, Lindley Darden, and Carl Craver (2000, 4): “Mechanisms are entities and activities organized such that they are productive of regular changes from start or set-up to finish or termination conditions.”

In molecular biological mechanisms, types of entities include macromolecules (such as proteins and the nucleic acids, DNA and RNA), and sub-cellular structures, such as ribosomal particles (composed of RNA and proteins). Types of activities include geometrico-mechanical activities, such as lock and key docking of an enzyme and its substrate, and chemical bonding activities, such as the formation of strong covalent bonds and weak hydrogen bonds. The entities and activities are organized in productive continuity from beginning to end; that is, each stage gives rise to the next. Entities having certain kinds of activity-enabling properties allow the possibility of acting in certain ways, and certain kinds of activities are only possible when there are entities having certain activity-enabling properties (Darden 2002; Darden and Craver 2002).

Descriptions of mechanisms show how the termination conditions are produced by the set up conditions and intermediate stages. For example, in the mechanism of DNA replication, the DNA double helix unwinds, exposing slightly charged bases to which complementary bases bond, producing, after several more stages, two duplicate helices. A DNA base and a complementary base hydrogen bond because of their geometric structures and weak charges. The organization of the entities and activities determines the ways in which they produce the phenomenon. Entities often must be appropriately located, structured, and oriented, and the activities in which they engage must have a temporal order, rate, and duration. To give a description of a mechanism for a phenomenon is to explain that phenomenon, i.e., to explain how it was produced.
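
The replication mechanism just described, in which each parental strand templates a complementary daughter strand, can be rendered as a toy model. This sketch is illustrative only; the unwinding enzymes and intermediate stages are omitted:

```python
# Semi-conservative DNA replication as a toy model: the helix unwinds
# into two template strands, and complementary bases bond to each.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def pair(strand: str) -> str:
    """Build the complementary strand by Watson-Crick base pairing."""
    return "".join(COMPLEMENT[base] for base in strand)

def replicate(helix: tuple[str, str]) -> list[tuple[str, str]]:
    """Each parental strand templates a new partner, yielding two duplicates."""
    strand_a, strand_b = helix
    return [(strand_a, pair(strand_a)), (pair(strand_b), strand_b)]

parent = ("ATGC", "TACG")
print(replicate(parent))  # two helices, each identical to the parent
```

Each daughter helix keeps one parental strand and gains one newly paired strand, which is why the result is two duplicates of the original.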

Scientists rarely depict all the particular details when describing a mechanism; representations are usually schematic, often depicted in diagrams. Such a representation may be called a "model of a mechanism" or a "mechanism schema." A mechanism schema is a truncated abstract description of a mechanism that can be instantiated by filling it in with more specific descriptions of component entities and activities. An example is James Watson's (1965) diagram of his version of the central dogma of molecular biology:

DNA → RNA → protein.

This is a schematic representation (with a high degree of abstraction) of the mechanism of protein synthesis, which can be instantiated with details of DNA base sequence, complementary RNA sequence, and the corresponding order of amino acids in the protein produced by the more specific mechanism. A mechanism schema can be instantiated to yield a description of a particular mechanism. In contrast, a mechanism sketch cannot (yet) be instantiated; components are (as yet) unknown. Sketches have black boxes for missing components and thus guide work to fill in the details. The general knowledge in molecular biology consists of a set of mechanism schemata, such as those for the mechanisms of DNA replication and repair, protein synthesis, and gene regulation.
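
Filling the schema in with a particular base sequence shows what instantiation amounts to here. The three-codon sequence below is a hypothetical example of my own, using only a fragment of the genetic code:

```python
# Instantiating the schema DNA -> RNA -> protein with a concrete sequence.
TEMPLATE_TO_RNA = {"A": "U", "T": "A", "C": "G", "G": "C"}  # template strand -> mRNA
CODE = {"AUG": "Met", "AAA": "Lys", "UUC": "Phe"}  # fragment of the genetic code

def transcribe(template_dna: str) -> str:
    """Produce the mRNA complementary to the DNA template strand."""
    return "".join(TEMPLATE_TO_RNA[base] for base in template_dna)

def translate(mrna: str) -> list[str]:
    """Read the mRNA codon by codon into an amino acid sequence."""
    return [CODE[mrna[i:i + 3]] for i in range(0, len(mrna) - 2, 3)]

mrna = transcribe("TACTTTAAG")
print(mrna, translate(mrna))  # AUGAAAUUC ['Met', 'Lys', 'Phe']
```

Each arrow in the schema corresponds to one function here, and supplying the DNA sequence yields the complementary RNA sequence and the corresponding amino acid order, just as the instantiation described above requires.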

2.2 Information

The term “information” is used ubiquitously by molecular biologists. Genes as linear DNA sequences of bases are said to carry “information” for the production of proteins. The information is “transcribed” from DNA to messenger RNA and then “translated” from RNA to protein. The protein then folds into its three-dimensional structure; protein folding is the only part of this mechanism not yet well understood. During DNA replication, and subsequent inheritance, it may be said that what is passed from one generation to the next is the “information” in the genes, namely the linear ordering of bases along complementary DNA strands. Historians of biology have tracked the entrenchment of this concept in molecular biology, and philosophers of biology have questioned whether a definition of “information” can be provided that adequately captures its usage in the field.

According to the historian Lily Kay, “Up until around 1950 molecular biologists…described genetic mechanisms without ever using the term information” (Kay 2000, 328). “Information” replaced earlier talk of biological “specificity.” Watson and Crick's second paper of 1953, which discussed the genetical implications of their recently discovered (Watson and Crick 1953a) double-helical structure of DNA, used both “code” and “information”: “…it therefore seems likely that the precise sequence of the bases is the code which carries the genetical information…” (Watson and Crick 1953b, 244, emphasis added).

In 1958, Francis Crick used and characterized the concept of information in the context of stating the central dogma of molecular biology. Crick characterized the central dogma as follows:

This states that once ‘information’ has passed into protein it cannot get out again. In more detail, the transfer of information from nucleic acid to nucleic acid, or from nucleic acid to protein may be possible, but transfer from protein to protein, or from protein to nucleic acid is impossible. Information means here the precise determination of sequence, either of bases in the nucleic acid or of amino acid residues in the protein. (Crick 1958, 152-153, emphasis in original)

Note that, as characterized by Crick, information was not static in the way that, say, coded words on a page were static. Instead, Crick's characterization of information was dynamic; that is, it required a mechanism operating to carry out a task, i.e., “the precise determination of sequence.” Crick also distinguished three different kinds of transfer or flow in the mechanism of protein synthesis: flow of information, flow of matter, and flow of energy.

As a molecular biologist, Crick explicitly focused his attention on flow of information, and not on flow of matter or energy. He discussed biochemical work dealing with matter and energy flow. Again we see one of the primary differences between molecular biology and biochemistry: molecular biology was concerned with genetic information and its role in protein synthesis. Crick emphasized that the nucleic acid sequences determined amino acid sequences and not vice versa. In 1958, it was still an open question how protein synthesis and nucleic acid synthesis operated. Crick's statement about the direction of information flow denied that amino acid sequence could determine the sequence of nucleic acid bases: the flow was one way, from genetic information to protein, but not back.

The central dogma did not go unchallenged. In 1970, an anonymous article in Nature, entitled “Central Dogma Reversed,” discussed the implications of the newly discovered enzyme, reverse transcriptase (Baltimore 1970; Temin and Mizutani 1970). In some viruses, whose genetic material was RNA, this enzyme was found to copy RNA into DNA, which was then inserted into the host genome. This reversal, the article claimed, challenged the “cardinal tenet of molecular biology” that “the flow or transcription of genetic information from DNA to messenger RNA and then its translation to protein is strictly one way” (Anonymous 1970, 1198). Crick (1970) responded that his statement of the central dogma had not been challenged by this finding. The principal problem to which the central dogma was addressed was the finding of “general rules for information transfer from one polymer [a long chain molecule] to another” (Crick 1970, 561). Crick pointed out that the denial of flow of information from proteins back to nucleic acid still held, even if the nucleic acid RNA could be complementarily copied to the nucleic acid DNA. A narrow-scope mechanism schema, with a reverse arrow from RNA to DNA, was added to the more widely found DNA to RNA one, qualifying Watson's diagrammatic representation (discussed in Darden 1995).
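The narrow-scope reverse arrow can be sketched in the same toy fashion as the forward one. Again this is only an illustration under stated assumptions (the function name is invented for the example; only the complementary pairing rule is standard): reverse transcriptase copies an RNA template into complementary DNA, preserving the sequence information while reversing the usual direction of the arrow.

```python
# RNA template base -> complementary DNA base, as in retroviral
# reverse transcription. Illustrative toy only.
RNA_TO_DNA = {"A": "T", "U": "A", "G": "C", "C": "G"}

def reverse_transcribe(rna_template: str) -> str:
    """Sketch of the narrow-scope RNA -> DNA arrow: copy an RNA
    sequence into its complementary DNA sequence."""
    return "".join(RNA_TO_DNA[base] for base in rna_template)

print(reverse_transcribe("AUGGCA"))  # -> TACCGT
```

The point Crick insisted on survives in the sketch: sequence information flows between nucleic acids in either direction by complementary copying, but no such copying step runs from protein back to nucleic acid.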

In addition to its use in seminal papers, “information” was found throughout textbooks in the field. Information was said to be “transferred” from DNA to RNA templates to proteins during protein synthesis (e.g., Watson 1965, 297). “Nucleic Acids Convey Genetic Information” was a chapter title in Watson et al.'s (1988, Ch. 3) textbook, Molecular Biology of the Gene. Again: “The translation of genetic information from the 4-letter alphabet of polynucleotides into the 20-letter alphabet of proteins is a complex process” (Alberts et al. 2002, 8).

It is important not to confuse the genetic code and genetic information. The genetic code refers to the relation between three bases of DNA, called a “codon,” and one amino acid. Tables available in molecular biology textbooks (e.g., Watson et al. 1988, frontispiece) show the relation between 64 codons and 20 amino acids. For example, CAC codes for histidine. Only a few anomalous exceptions to these coding relations have been found (see the small table in Alberts et al. 2002, 814). In contrast, genetic information refers to the linear sequence of codons along the DNA, which (in the simplest case) is transcribed into messenger RNA, which is translated to linearly order the amino acids in a protein. Many exceptions to the colinearity hypothesis have been found (as discussed in Section 1.2 above), such as in split and overlapping genes.
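The code/information distinction can be made vivid with a small sketch (illustrative only; the table fragment shows just four of the 64 codons, including the CAC–histidine example from the text). The fixed lookup table corresponds to the genetic code; the particular input sequence, with its linear order of codons, corresponds to the genetic information:

```python
# Fragment of the (near-universal) codon table: the genetic CODE.
# Only four of the 64 codons are shown; the assignments listed are
# the standard ones.
CODON_TABLE = {
    "CAC": "His",  # histidine, the example given in the text
    "CAU": "His",
    "AUG": "Met",
    "UUU": "Phe",
}

def translate(mrna: str) -> list:
    """The genetic INFORMATION is the particular linear order of
    codons; reading it three bases at a time fixes the order of
    amino acids in the polypeptide."""
    codons = [mrna[i:i + 3] for i in range(0, len(mrna), 3)]
    return [CODON_TABLE[c] for c in codons]

print(translate("AUGCACUUU"))  # -> ['Met', 'His', 'Phe']
```

Two different genes share the same code but carry different information, just as two sentences share the same alphabet but say different things.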

The usage of “information” in the mathematical theory of communication is too impoverished to capture the molecular biological usage (for critiques see Sarkar 1996b, 1996c; Sterelny and Griffiths 1999, 101-104), while the usage in cognitive neuroscience, with its talk of “representations” (e.g., Crick 1988, 154-155), may be said to be too rich. The coded sequences in the DNA are more than just a signal with some number of bits that may or may not be accurately transmitted, yet they are not said to have within them a representation of the structure of the protein. (The way in which the linear order of amino acids in a protein determines its three-dimensional structure is still an unsolved problem; however, even if these rules were known, it is doubtful that molecular biology would use the language of “representation.”) No definition of “information” as it is used in molecular biology has yet received wide support among philosophers of biology.

Stephen Downes distinguished three positions on the relation between information and the natural world:

  1. Information is present in DNA and other nucleotide sequences. Other cellular mechanisms contain no information.
  2. Information is present in DNA, in other nucleotide sequences and other cellular mechanisms, for example cytoplasmic or extra-cellular proteins; and in many other media, for example, the embryonic environment or components of an organism's wider environment.
  3. DNA and other nucleotide sequences do not contain information, nor do any other cellular mechanisms. (Downes, forthcoming)

These options may be read either ontologically or heuristically. Much of the philosophical discussion of information centered on ontological debates. A heuristic reading of (1) viewed the talk of information in molecular biology as useful in providing a way of talking and in guiding research (Downes, forthcoming).

Philosophical work continues, first, to find an adequate characterization of “information” as it is used in molecular biology; second, to distinguish mechanisms in which information is said to be transferred (such as DNA replication and protein synthesis) from those in which it is not (such as many metabolic reactions); and third, to answer the question of whether something appropriately called “information” is to be found in molecules and mechanisms. (For recent attempts to address these lingering issues, see Sarkar's (2005, section 10.3) application of semiotic information and the forthcoming entry on biological information.)

2.3 Gene

The question of whether classical genetics could be (or already has been) reduced to molecular biology (to be taken up below) motivated philosophers to consider the connectibility of the term the two fields shared: the gene. Investigations of reduction and scientific change raised the question of how the concept of the gene evolved over time, a question that figured prominently in Philip Kitcher's (1982, 1984) and Raphael Falk's (1986) work. Falk asked philosophers and historians of biology, “What is a Gene?” Falk drew on Kenneth MacCorquodale and Paul E. Meehl's distinction between quantities that can be obtained by manipulating values of empirical variables without hypothesizing the existence of unobserved entities or processes (dubbed “intervening variables”) and concepts that assert the existence of entities and the occurrence of events not reducible to the observable (dubbed “hypothetical constructs”) (MacCorquodale and Meehl 1948). Employing this distinction, Falk claimed that the gene began as an intervening variable but morphed into a hypothetical construct, first with Morgan's chromosomal theory of inheritance and then with molecular biology, when the gene became equated with a sequence of DNA. Also, C. Kenneth Waters' analysis of the gene concept (1990, 1994) makes sense only in light of the reduction debate (see the entries on genetics, molecular genetics).

Philosophical discussions of the gene concept gradually broke free from the reduction literature. Discoveries such as overlapping genes, split genes, and alternative splicing (discussed above) made it clear that simply equating a gene with an uninterrupted stretch of DNA would no longer capture the complicated molecular-developmental details of mechanisms such as gene expression. Also, through the development of molecular knockout experiments, a picture of how genes functionally interacted with each other throughout the processes of gene regulation and gene expression emerged (for an introduction to such research and its implications, see Culp 1997; Keller 2000; Morange 2001; Tabery 2002).

In light of the enormous complexity found in the process of moving from a stretch of DNA to a polypeptide chain (a string of amino acids and often a part of a protein), Falk's (1986) question persists: What is a gene? Two general trends have emerged in the philosophical literature to answer this question and to accommodate the molecular-developmental phenomena: first, distinguish multiple gene concepts to capture the complex structural and functional features separately, or second, rethink a unified gene concept to incorporate such complexity. (For a broader categorization of the philosophical responses to this question, incorporating analyses of the evolutionary gene concept in addition to gene concepts pertaining to molecular biology, see Falk 2000.)

A paradigmatic example of the first line came from Lenny Moss's distinction between Gene-P and Gene-D (Moss 2001, 2002). Gene-P embraced an instrumental preformationism (providing the “P”); it was defined by its relationship to a phenotype. In contrast, Gene-D referred to a developmental resource (providing the “D”); it was defined by its molecular sequence. An example will help to distinguish the two: It was found that cystic fibrosis, a common lethal genetic disease in the US, resulted from an abnormality in cellular membrane proteins that functioned to transport chloride between cells and the extracellular fluid (for an overview of this research, see Collins 1992). The result was an imbalance in extracellular chloride, generating the tell-tale mucus that coats victims' cells, potentially generating deadly infections. When one talked about the gene for cystic fibrosis, the Gene-P concept was being utilized; the concept referred to the ability to track the transmission of this gene from generation to generation as an instrumental predictor of cystic fibrosis, without being contingent on knowing the causal pathway between the particular sequence of DNA and the ultimate phenotypic disease. The Gene-D concept, in contrast, referred instead to just one developmental resource (i.e., the molecular sequence) involved in the complex development of the disease, which interacted with a host of other such resources (proteins, RNA, a variety of enzymes, etc.); Gene-D was indeterminate with regard to the ultimate phenotypic disease. Moreover, in cases of other diseases where there are different disease alleles at the same locus, a Gene-D perspective would treat these alleles as individual genes, while a Gene-P perspective would treat them collectively as “the gene for” the disease.

Keller (2000), like Moss, suggested a conceptual distinction for the gene concept, claiming that genes can be thought of as two kinds of entities: a structural entity and a functional entity. The structural entity was what was maintained by the molecular machinery of the cell for transmission to the next generation; the functional entity emerged from the complicated developmental process, in which the structural entity was but one of many players (Keller 2000, 70-72).

A second philosophical approach for conceptualizing the gene involved rethinking a single, unified gene concept that captured the molecular-developmental complexities. For example, Eva Neumann-Held (Neumann-Held 1999, 2001; Griffiths and Neumann-Held 1999) claimed that a “process molecular gene concept” (PMG) embraced the complicated developmental intricacies. On her unified view, the term “gene” referred to “the recurring process that leads to the temporally and spatially regulated expression of a particular polypeptide product” (Neumann-Held 1999). Returning to the case of cystic fibrosis, a PMG for an individual without the disease referred to one of a variety of transmembrane ion-channel templates along with all the epigenetic factors involved in the generation of the normal polypeptide product. And so cystic fibrosis arose when a particular stretch of the DNA sequence was missing from this process.

Falk (2001) characterized the concept of the gene in the same unifying vein, but with more emphasis on the fundamental role of the DNA sequence in a process such as gene expression. Falk's gene concept referred to a DNA sequence that corresponded to a single norm of reaction for various molecular products based on the varying epigenetic conditions. A norm of reaction is a relationship between a genetic variable and a changing environment; Falk applied this concept to the gene, making the genetic variable the DNA sequence and the changing environment the epigenetic conditions. Hence, the epigenetic conditions involved in a process were important to recognize in so far as they affected the different molecular products that could be generated from the DNA sequence.

Philosophers and historians of biology have not yet reached a consensus in answer to Falk's (1986) question: what is the gene? This fact has elicited a range of reactions. Rheinberger (2000) agreed that the gene concept was fuzzy but welcomed the imprecision; the gene was fruitful as an object of research in flux because the concept also remained operationally in flux (see Rheinberger and Mueller-Wille's entry on gene). Paul Griffiths, meanwhile, in a review of the volume in which Rheinberger's essay appears, deemed the gene concept “Lost” but offered a “Reward to Finder” (Griffiths 2002; Beurton, Falk, and Rheinberger 2000). In fact, Griffiths and Karola Stotz are currently leading a philosophical search party of sorts. The “Representing Genes” project includes a group of philosophers and historians of biology who are attempting to operationalize some of the various philosophical claims about the gene concept discussed above and then test those claims. The Representing Genes project can be monitored at its website: Representing Genes: Testing Competing Philosophical Analyses of the Gene Concept in Contemporary Molecular Biology.

3. Molecular Biology and General Philosophy of Science

In addition to analyzing key concepts in the field, philosophers have employed case studies from molecular biology to address more general issues in the philosophy of science. The issue of reduction was addressed by considering whether classical genetics had been reduced to molecular biology. Cases from molecular biology have been used to analyze the issues of laws, theory structure, explanation, and experimentation. For each of these philosophical issues, it will be argued, evidence from molecular biology directs philosophical attention toward understanding the concept of a mechanism for addressing the topic.

3.1 Theory Reduction and Integration of Fields

Reflecting on the historical origins of molecular biology discussed above, it should come as no surprise that the field appeared to many philosophers of science to offer an ideal case of reduction. Molecular biology emerged out of the search for the structure and function of the gene, so might the older field of classical genetics be (or have been) simply reduced to a successor—molecular biology?

Classical genetics had two laws, which, at first, seemed likely candidates for reduction to (derivation from) molecular laws. Based on patterns of inheritance of characters during breeding experiments, classical geneticists inferred regularities in the behavior of genes. These regularities were captured in Mendel's laws of segregation and independent assortment (as described in Section 1.1 above). The formal reduction of classical genetics to molecular biology required that these classical laws be logically deduced from laws of molecular biology. However, it was not possible to identify anything in molecular biology that was called a “law” or that played a role sufficient to allow logical derivation of Mendel's laws. Alternative analyses of the relation between classical genetics and molecular biology have included claims of replacement, informal reduction, and explanatory extension, as well as analyses of the different mechanisms investigated in the two fields.

Kenneth Schaffner used and developed Ernst Nagel's (1961, Ch. 11) analysis of derivational theory reduction to argue for the reduction of classical Mendelian genetics (T2) to molecular biology (T1) and refined it over many years (summarized in Schaffner 1993). The goal of formal reduction was to logically deduce the laws of classical genetics (or its improved successor, “modern transmission genetics” T2*) from the laws of molecular biology. Such a derivation required that all the terms of T2* not in T1 had to be connected to terms in T1 via “correspondence rules.” Hence, Schaffner endeavored to find molecular equivalents of such terms as “gene,” as well as “predicate terms,” such as “is dominant.” (One allele of a gene is said to be “dominant” over another if the character associated with that allele appears in the hybrid offspring of a cross between pure breeding parents with the two alleles. For example, in a cross between tall and short pea plants, tall is dominant over short.)

David Hull (1974) criticized formal reduction, argued against Schaffner's claims, and suggested, instead, that perhaps molecular biology replaced classical genetics. Hull's critiques focused on the problem of connectibility of terms. A close look at their dispute showed that it hinged on debates about mechanisms of gene expression. Hull said:

Even if all gross phenotypic traits are translated into molecularly characterized traits, the relations between Mendelian and molecular predicate terms express prohibitively complex, many-many relations. Phenomena characterized by a single Mendelian predicate term can be produced by several different types of molecular mechanisms. Hence any reduction will be complex. Conversely, the same types of molecular mechanisms can produce phenomena that must be characterized by different Mendelian predicate terms. Hence, reduction is impossible. (Hull 1974, 39)

Schaffner criticized Hull for claiming that the same molecular mechanism could give rise to phenomena labeled with different Mendelian terms. “Different molecular mechanisms can appropriately be invoked in order to account for the same genetically characterized relation, as the genetics is less sensitive. The same molecular mechanisms can also be appealed to in order to account for different genetic relations, but only if there are further differences at the molecular level,” such as different initial conditions (Schaffner 1993, 444). Empirical investigation was required to determine the molecular mechanisms of gene expression (Schaffner 1993, 439).

Such connectibility of terms and logical derivation of the laws of one theory from those of another, required by formal reduction, were peripheral to the concerns of scientists, as both Schaffner (1974b) and Hull (1974) realized. The idealized formal reduction relation, even if it could have been imposed on some version of the historically developing fields (or some logical reconstruction of their theories), did not serve to capture the practice of scientists.

Darden and Maull (1977) focused attention on the bridges between fields as an important locus of new discoveries in science. The bridges might be identities (required of correspondence rules in the formal reduction model), but they might also be other kinds of relations. Interfield relations included part-whole relations (e.g., genes are parts of chromosomes), structure-function relations (e.g., an identified molecule functions as the repressor in gene regulation), and cause and effect relations. Sometimes the relations were elaborated in an “interfield theory,” such as the chromosome theory of Mendelian heredity (genes as parts of chromosomes). What was important was to find the interfield relations, not to formally derive anything from anything. However, Darden and Maull missed the importance of mechanisms in their analysis.

Also moving beyond debates about formal theory reduction, Kitcher (1984; 1989; 1999) and Waters (1990) advanced the discussion about the relations between the fields. Kitcher criticized a reductive approach. Waters defended informal aspects of reduction. Utilizing an analysis of theory structure in terms of argument schemata (see Section 3.2), Kitcher argued that the relation between Mendelian and molecular genetics was “explanatory extension” (Kitcher 1984, 371). The theory of molecular genetics provided a refined and expanded set of premises when compared to the argument schemata of classical genetics (Kitcher 1989, 440-442). However, classical genetics retained its own schema. For example, the independent assortment of genes (Mendel's second law) was explained, according to Kitcher, by instantiating a pairing and separation schema, thereby showing that chromosomal pairing and separation was a unifying natural kind (Kitcher 1999). Such unification would be lost if attention were focused on the gory details at the molecular level. The cytological level thus constituted an “autonomous level of biological explanation” (Kitcher 1984, 371). On the other hand, to solve problems of gene replication, mutation, and action, the gory molecular details were required, and were part of the expanded premise set of the schema labeled “Watson-Crick” (Kitcher 1989, 441). Waters (1990) countered that the gory molecular details of chromosomal mechanisms were informative. Waters thus defended “informal reduction,” one aspect of which was that the lower level provided greater understanding of chromosomal mechanisms.

As early as 1976, Wimsatt (1976) argued for a shift in the reduction debate from talk of relations between theories to talk of decompositional explanation via mechanisms. An alternative to the decompositional view of mechanisms analyzed hereditary mechanisms in terms of their entities and activities that bottom out at different size levels (Machamer, Darden, Craver 2000). According to Darden (forthcoming), the fields of classical genetics and molecular biology investigated serially connected mechanisms with different working entities that operate at different times in (what is now known to be) an integrated temporal series of hereditary mechanisms. Previous philosophical accounts missed the importance of these temporal relations among different mechanisms.

Classical geneticists discovered the “mechanism of Mendelian heredity” (Morgan et al. 1915). This mechanism, which produced the phenomena of gene segregation and independent assortment, was discovered, not by decomposing genes into their parts, but by finding the wholes on which the parts were riding. Genes, Mendelian geneticists showed, were parts of chromosomes. The phenomena sketched in Mendel's laws of segregation and independent assortment were (and are) explained by the behavior of chromosomes because the chromosomes were found to be the working entities of the mechanisms of chromosomal pairing and separation during the formation of germ cells.

Molecular biologists discovered different mechanisms that operate before and after chromosomal pairing and separation, that have different working entities of different sizes, and that required molecular techniques rather than Mendelian/cytological ones to find. The discovery of the DNA double helix allowed the elucidation of the mechanism of gene reproduction and aided in understanding mechanisms of gene expression. Molecular biologists showed that DNA replication (with occasional errors producing mutations) was a first step in the duplication of chromosomes, which occurred prior to their pairing and separation. Gene expression, leading to the production of phenotypic characters, occurred after mating. Gene expression included the mechanism of protein synthesis, with the transcription of DNA sequence into messenger RNA and the translation of RNA into the amino acid sequence of proteins. Gene expression also included the many mechanisms of gene regulation operative during embryological development, which biologists continue to seek. Mechanisms of DNA replication and protein synthesis showed remarkable unity in all living things, both in those with organized chromosomes and those without (such as bacteria), in sexually and asexually breeding organisms, and in plants and animals.

The working entities differed in the hereditary mechanisms discovered by classical genetics and molecular biology. The working entities of these temporally successive mechanisms were found to be the entire DNA double helix in DNA replication, the chromosomes during germ cell formation (with most genes packed away and inactive), and segments of DNA (genes) playing roles in the mechanisms of gene expression, such as protein synthesis and gene regulation.

Integration of a temporal series of hereditary mechanisms, Darden (forthcoming) argued, was the appropriate way to characterize the relations between classical genetics and molecular biology. This mechanistic analysis better captured, she claimed, the practice of biologists, with their frequent talk of mechanisms, than the analyses of the relations between the fields in terms of formal derivational theory reduction, informal reduction, replacement, and explanatory extension via expanded argument schemata. Progress in twentieth century genetics occurred by discovering mechanisms with different working entities of different sizes and integrating these mechanisms into a temporal series of hereditary mechanisms.

3.2 Laws, Theories, and Explanation in Molecular Biology

Traditional problems in philosophy of science have been to understand the structure of scientific theories, as well as the nature of scientific laws (see the entries on laws of nature and scientific explanation). The traditional analyses viewed scientific theories as sets of laws, which were universal (applying throughout the universe), general (exceptionless), and necessary (not contingent). Explanation of particular observation statements was analyzed as subsumption under (derivation from) general laws plus the initial conditions of the particular case. Philosophers of biology have criticized these traditional analyses as inapplicable to biology, especially molecular biology.

Since the 1960s, philosophers of biology have questioned the existence of biological laws of nature. J. J. C. Smart (1963) emphasized the “Earth-boundedness” of the biological sciences (in conflict with the universality of natural laws). John Beatty (1995) argued that purported “laws” of, for example, Mendelian genetics, were contingent (in conflict with the necessity of natural laws); the chromosomal apparatus that produces those regularities had evolved, along with sexually breeding organisms. No purported “law” in biology has been found to be exceptionless, even for life on earth (in conflict with the generality of laws). (For further discussion, see Waters 1998.) Hence, philosophers' search for biological laws of nature, characterized as universal, necessary generalizations, has ceased.

In the philosophy of science, two views about explanation (see entry on scientific explanation) and theory structure now compete: the unificationist and the causal/mechanical. Arguing against the causal/mechanical view, Philip Kitcher (1989, 1993) developed a unificationist account of explanation accompanied by an “argument schemata” view of theory structure. He and Culp explicitly applied it to molecular biology (Culp and Kitcher 1989). Among the premises of the “Watson-Crick” schema, for example, were “transcription, post-transcriptional modification and translation for the alleles in question,” along with details of “cell biology and embryology” for the organisms in question (Kitcher 1989, 440-442). An explanation of a particular pattern of distribution of progeny phenotypes in a genetic cross resulted from instantiating the appropriate schema: the variables were filled with the details from the particular case and the conclusion derived from the premises. This unificationist tradition was thus a descendant of the deductive-nomological view in which explanation resulted from deduction of the conclusion from initial conditions and generalizations (general premises for Kitcher).

Appealing to both accounts of explanation, Schaffner (1993) argued that molecular biology (along with other biomedical fields, such as immunology and neurobiology) was largely composed of “middle-range” theories, such as the operon theory of gene regulation (see Section 1.3). Such theories, Schaffner claimed, were presented as “temporal models,” namely “collections of entities that undergo a process” (Schaffner 1993, 98). These models, he said, were “interlevel,” involving entities at different levels of organization. They were “middle-range” in their scope of applicability among organisms between universal theories and unique particular findings. Such variability was to be expected, given the evolutionary perspective of selection for variations under changing environmental conditions. Despite the lack of universal laws, such theories, Schaffner claimed, did contain causal generalizations, illustrated by the phrase “same cause (or initial conditions and mechanisms), same effect” (Schaffner 1993, 121). Hence, middle range theories, he argued, supported counterfactuals, organized knowledge, and were testable—features that one expects of theories (Schaffner 1993, 119-126). “Thus the explanans, the explaining generalizations in such an account, will be a complex web of interlevel causal generalizations of varying scope” (Schaffner 1993, 286). This variability in scope “does not negate the nomic force of the generalizations and the mechanisms” (Schaffner 1993, 294).

Working in Wimsatt's (1976) tradition of emphasizing the decomposition of complex systems into interacting parts, Sahotra Sarkar also appealed to both general rules and their roles in molecular mechanisms. He argued that explanation in molecular biology consisted in “physical reductionism,” which was “traditionally called ‘mechanism’” (Sarkar 1998, 137). The behaviors of molecular biological systems were explained by the spatial organization (and interactions) of the constituent parts governed by more fundamental generalizations, dubbed “F-rules” of “macromolecular physics.” These rules “are peculiar to molecular biology” and “provide it with a theoretical framework of its own.” Included in the rules were the role of weak chemical interactions, the determination of molecular function based on molecular structure, and the prevalence of lock-and-key fit in molecular interactions (Sarkar 1998, 149-150).

In contrast to the decompositional view of mechanisms, and in contrast to Kitcher's argument schemata, Machamer, Darden, and Craver (2000) argued that what played the role of theory in molecular biology were a set of mechanism schemata. A phenomenon was explained, they argued, by instantiating a schema to show how the phenomenon resulted from a productively continuous mechanism. In contrast to the decompositional view, they did not solely emphasize the interactions of entities within a mechanism, but argued for a dualistic perspective embracing both entities and activities. Activities, they argued, were the producers of change and, as such, played a fundamental role in causal/mechanical explanation (Machamer, Darden, Craver 2000, 3). James Tabery (2004) emphasized the dynamicity captured by activity-talk, which differentiated a decomposed system of interacting parts from a dynamic mechanism engaged in a temporal process. Among the kinds of activities in molecular biological mechanisms, Machamer, Darden, and Craver included the formation of weak chemical bonds and the geometrico-mechanical lock and key docking of an enzyme and its substrate. They thus called attention to the active aspect of the features whose behavior Sarkar (1998, 149-150) had emphasized with his “weak interactions rule” and “lock and key fit for molecular interactions.”

To explain a phenomenon, Machamer, Darden and Craver argued, was to show how that phenomenon was produced by a mechanism, not to subsume it under a generalization, as Schaffner suggested, nor to show how it conforms to a rule, as Sarkar suggested. The scope of a mechanism schema's domain had to be determined in order to know whether it could be instantiated in a particular instance to provide an explanation. Furthermore, molecular biological mechanism schemas have domains of varying scope, as Schaffner noted. It was found empirically that, for example, the schema RNA → DNA → RNA → protein had a small scope, applicable only to retroviruses. That finding licensed its use for explaining protein synthesis in the retroviral domain. On this view, molecular biological explanations were provided not by unification (as Kitcher claimed) but by finding a productively continuous mechanism, whatever its scope.

3.3 Experimentation

Molecular biology has also shed light on the role of experiments and techniques in producing reliable results, another general topic in the philosophy of science. The debate about realism and antirealism provided one context in which the role of experiments was explored. At issue was whether science provides reliable knowledge about the real world or whether scientific knowledge is socially constructed. Both Schaffner (1995) and Sylvia Culp (1995) argued for the use of multiple techniques to produce reliable results. Schaffner (1995) discussed “direct evidence” for, and the “stability” of, experimental results, using examples from molecular immunology. Culp (1995) argued for reliable results based on the use of different techniques that produce compatible results. Her example was the use of different methods for sequencing DNA, as discussed above. Schaffner's and Culp's arguments supplemented that of Wimsatt (1981) on robustness, namely the credentialing of results by obtaining them via independent techniques.

Rheinberger (1997) argued for the importance of experimental systems as the unit of analysis in the history of molecular biology and biochemistry. With a focus on discovery, he emphasized the way that exploration of experimental systems led to unprecedented findings, to new “epistemic things.” Burian (1997) queried Rheinberger as to how objects discovered in two different experimental systems could be known to be the same epistemic thing, a problem similar to that raised by realists against the older operationalism.

Reasoning and experimental strategies for discovering mechanisms have been the focus of recent work. William Bechtel and Robert Richardson (1993) elaborated the strategies of decomposing a system and localizing subcomponents to find the mechanism that produced the behavior of the system. They discussed experiments on connecting the genetics of eye pigment inheritance in fruit flies with its decomposition to find the biochemical mechanisms of eye pigment production. These experiments, they claimed, forced a reconceptualization of “Mendelism's one gene—one trait” to a lower level of analysis of “one gene—one enzyme” (Bechtel and Richardson 1993, 193).

When one has merely a mechanism sketch and many components are unknown, one may attempt to isolate components by literally decomposing a biological system and searching for the relevant activities. For example, Rheinberger (1997) discussed the use of centrifuge techniques for “draining the biochemical bog” during the discovery of components of the mechanism of protein synthesis. Centrifuge fractions were examined for their activity during the construction of an experimental system of in vitro protein synthesis. Intertwined theoretical and experimental work, utilizing in vitro protein synthesis systems, led to the discovery of various RNAs and their roles in protein synthesis (Darden and Craver 2002).

When one has a “how possibly” mechanism schema, various experimental strategies can be used to transform it to an account of “how plausibly” and then “how actually” the mechanism operates. Craver (2002) discussed experimental strategies for testing a hypothesized mechanism. Such experiments have three basic elements: (i) an experimental set up in which the mechanism (or a part of it) is running, (ii) an intervention technique, and (iii) a detection technique. Several different kinds of intervention strategies have been used historically. First are activation strategies in which the mechanism is activated and then some downstream effect is detected. A common intervention is to put in a tracer, such as a radioactive or high density element, activate the normal mechanism, and detect the tracer as it runs through the mechanism. Good tracers do not significantly alter the running of the activated mechanism; they merely allow observation of its workings. This experimental technique was used to prove the existence of messenger RNA, which was labeled differently from other cellular RNAs (Brenner, Jacob and Meselson 1961).

A second class of experimental strategies includes those that involve not merely activating but modifying the normal working of the mechanism. As Wimsatt (personal communication) and Glennan (1992) have stressed, a way to learn about a mechanism is to break a part of it and diagnose the failure. A fruitful way to learn about the action of a gene is to knock it out and note the effects in the organism. As with the notorious ablation experiments in physiology in the nineteenth century, the problem with gene knockout techniques in intact animals is that such a missing part may have multiple effects that are difficult to disentangle, given the often complex relations between genotype and phenotype (Culp 1997). Another kind of modification strategy is what Craver (2002) called an “additive strategy,” which is similar to what Bechtel and Richardson (1993, 20) called “excitatory studies.” Some component in the mechanism is augmented or over-stimulated, and effects are then detected downstream. Craver's example was of engineered mice with more of a specific kind of neural receptor. Those mice were shown to learn faster and to retain what they had learned longer, thereby providing evidence for the role of such receptors in learning and memory. Craver suggested that using all three types of strategies (activation, ablation, addition) to study the same mechanism helps to strengthen the evidence provided by the results of the use of any one of them. Each helps to compensate for the weaknesses of the others to yield a robust conclusion (Wimsatt 1981).

4. Conclusion

An overview of the history of molecular biology revealed the original convergence of geneticists, physicists, and structural chemists on a common problem: the structure and function of the gene. Conceptual and methodological frameworks from each of these disciplinary strands united in the ultimate determination of the double helical structure of DNA (conceived of as an informational molecule) along with the mechanisms of gene replication, mutation, and expression. With this recent history in mind, philosophers of molecular biology have examined the key concepts of the field: mechanism, information, and gene. Moreover, molecular biology has provided cases for addressing more general issues in the philosophy of science, such as reduction and the integration of fields, explanation without laws of nature, the structure of biological theories, and strategies for experimentation. It has been argued that, given the importance of the discovery of macromolecular mechanisms throughout the history of molecular biology, a philosophical focus on mechanisms generates the clearest picture of its history, of its concepts, and of the cases from its past utilized by philosophers of science.


Related Entries

causation: and manipulability | developmental biology | developmental biology: evolution and development | experimentation | gene | genetics | genetics: evolutionary | genetics: genotype/phenotype distinction | genetics: molecular genetics | heredity and heritability | human genome project | information: biological | innate/acquired distinction | laws of nature | life | natural selection: units and levels of | reduction and reductionism | replication | scientific explanation

Copyright © 2005
Lindley Darden
James Tabery
