Thinking About the Theory of Design

A Report on the Symposium "Can There be a Scientific Theory of Intelligent Design?" held at the 48th Annual Meeting of the American Scientific Affiliation, Seattle Pacific University, Seattle, WA, August 9, 1993. Originally published in Origins Research 15, no. 2.
Introduction: Why Return to a Disreputable Business?
Present theological discussions . . . ignore natural theology, and for contemporary linguistic philosophers the Argument from Design possesses no validity whatsoever and is logically and morally indefensible, although it may serve to heighten religious emotions.

Meyrick H. Carre, “Physicotheology,” The Encyclopedia of Philosophy
One wonders what religious emotions the argument from design is supposed to heighten. Presumably they do not themselves make any strong claim to reasonableness. “All poets,” said W.H. Auden, “adore explosions, thunderstorms, tornadoes, conflagrations, ruins,” and “scenes of spectacular carnage.”1 And that is why, perhaps, poets are not generally at the reins of power. “The poetic imagination,” Auden noted ruefully, “is not at all a desirable quality in a statesman.” Analogously, we might observe, any emotion that can be heightened by a completely invalid and indefensible argument is hardly one to be encouraged in the scientific enterprise — although, to Carre’s way of thinking, religion may have some use for the sentiment.
Is the theory of design (as I shall call it) really as bad as all that?2 Ask most scientists and philosophers, and the answer will be yes. Design as a scientific explanation is widely regarded as a dusty museum piece, a device that ceased to function in the nineteenth century. According to this view, when design collapsed around 1859 and the wreckage went into a display case, it was discovered that the theory had been supported all along only by various logical and theological mistakes. Therefore (it is claimed), if design has anything to teach us today, the lesson is strictly cautionary. Nowadays all reputable scientists and philosophers, whatever they may believe away from the lab or seminar room, are methodological naturalists.
That’s the usual story. Looked at closely, however, the usual story has some remarkable, or fabulous — meaning genuinely legendary — passages. It is a legend, for example, that Charles Darwin solved the problem of the origin of biological complexity. It is a legend that we have a good or even fair grasp on the origin of life, or that proper explanations refer only to so-called natural causes. To be sure, these and other legends of philosophical naturalism have a certain stature. One does not speak too harshly of them in polite company.
But neither should one accept them uncritically. Indeed, if we view the legends of philosophical naturalism with justifiable skepticism, the case against design looks far less formidable. But we can go further. While much work remains to be done to develop an empirically fruitful theory of design, it appears that none of the standing objections to design is unanswerable. As those objections are removed one by one, it is quite likely that within the next decade (in the words of design theorist Bill Dembski) “a theory of design can be formulated which will have significant advantages over its Darwin-inspired competitors.”
Now before the reader dismisses me for my naive optimism, let me acknowledge that a profound antipathy to design extends throughout the scientific and philosophical communities. When all the arguments for the theory have been weighed, many persons will still conclude that design is a bad idea assembled from noxious materials, which belongs where it has long resided — safely behind glass in the Hall of Great Scientific Failures.
Plainly, I do not think design is a bad idea; when weighed, arguments for design are, I believe, powerfully compelling. To be weighed, however, arguments for design must be heard. Thus, in August, the American Scientific Affiliation (ASA), an organization of Christians in the sciences, convened a symposium on the theory of design. Those unfamiliar with the ASA might suppose that such a symposium would be a congress of the already-converted, but that is not the case. Many of the strongest critics of design as a scientific explanation are leaders in the ASA, prominent in both its publications and lectures. These persons argue that they, perhaps better than most, are qualified to find the theory of design wanting. They are the intellectual offspring of believing scientists of past generations, who (it is said) found that design, when applied as a scientific explanation, fell to pieces in one’s hands. Thus the ASA was in many respects a less sympathetic audience for considering design than many secular audiences might have been.
No poll was taken after the symposium, but in discussions before and after the talks, the speakers (including myself) discovered great curiosity about the merits of design as an explanation and cautious encouragement for attempts to reframe the theory on new foundations. This report brings some of the points argued to a larger audience. Readers are strongly encouraged to contact the speakers (c/o Origins Research) with any criticisms or insights. Three of the speakers (Dembski, Meyer and Nelson) are engaged in a research project on the theory of design, funded by the Pascal Centre in Ontario, Canada, which will culminate in a book-length monograph on the subject.
In what follows I review the major points of each talk.
The first speaker, William Dembski, is a University of Chicago-trained mathematician (Ph.D. 1988), now completing a second Ph.D. in philosophy. Dembski approaches the theory of design first as a probabilist interested in explicating the structure of the “ordinary” design inferences that abound in our everyday life.
These inferences conform under analysis to what Dembski calls “a standard operating procedure,” illustrated by the flow chart in Figure 1. Consider an example3: John Smith died because his pacemaker malfunctioned. Smith’s death due to pacemaker malfunction is the event, E (the circle at the top of the flow chart), to be explained. Now suppose that on examining the pacemaker and Smith’s medical history, we discover that the pacemaker battery, although guaranteed to be fully functional for five years, was certain to run out after seven. Smith was negligent and forgot to replace the battery. Sure enough, it ran out, Smith’s heart fibrillated, and he died.
In this example we terminate (in explaining E) at the first decision node, HP. Given the physical principles governing pacemaker batteries and Smith’s negligence, his death from pacemaker malfunction was a high probability, or HP event, certain or virtually certain to occur. “And if we can explain by necessity,” said Dembski, “chance and design are automatically precluded.”
Suppose Smith wasn’t negligent, however. Suppose, in fact, that he replaced the battery just a year ago. Here we pass to the second decision node, IP, or to the class of intermediate probability events. These events, said Dembski, “are sufficiently probable as not to be a source of amazement.” Smith, it turns out, fell victim to a chance failure. Pacemaker manufacturers routinely test very large samples of batteries to ensure that the probability of failure for any given battery is extremely low. Nevertheless, while unlikely, it is still possible that before the expected five-year period, some batteries will by chance fail. Smith came up unlucky in this dreadful lottery. The pacemaker battery ran out after only one year, his heart fibrillated, and he died. Smith’s death is an IP event. Such chance events occur — they fill the newspaper — but we don’t attribute them to design.
Solving a Mystery
Consider another scenario, however. At Smith’s autopsy we can find nothing amiss with his pacemaker — except for some peculiar damage that we know can be caused only by exposure to intense microwave radiation. Our suspicions aroused, we begin an inquest and soon discover the following:
- Jane Doe, Smith’s co-worker, rented microwave-transmitting equipment 10 days before Smith’s death.
- Smith signed a life insurance policy one week before his death, naming Jane Doe the sole beneficiary.
- There are scratches on Smith’s kitchen floor that correspond exactly to the dimensions of the microwave equipment.
- A pamphlet on pacemaker risks, including microwave exposure, is found in Jane Doe’s car.
- The microwave warning in the pamphlet is underlined.
- “Get John Smith next week” is written in the margin, next to the underlined warning, in Jane Doe’s handwriting.
- Witnesses saw Jane Doe leave Smith’s house shortly before he was discovered dead.
The police arrest Jane Doe immediately. She protests her innocence, but the district attorney jails her anyway and charges her with the premeditated murder of Smith. In the course of a trial the jury convicts her of the crime, and she is sentenced to life imprisonment.
Why do Jane Doe’s claims of innocence ring hollow? How can we be reasonably certain that she intended — designed, if you will — to kill Smith?
We have moved to the last decision node, SP/sp. Confronting us is an event of small probability (SP): the likelihood that Smith’s pacemaker failure was caused by anything but intense microwave radiation is vanishingly remote. This was determined at the autopsy before the inquest began: indeed, this finding caused us to suspect foul play, and thus to begin the inquest. Note, however, that this small probability event alone isn’t sufficient grounds for us to infer design (i.e., purposeful action) or to implicate Jane Doe. After all, Smith might have owned a poorly insulated microwave oven or tinkered unwisely with microwave equipment in his workshop.
The seven lines of evidence, however, do implicate Jane Doe. These, taken jointly, are what Dembski calls a specification. Specification (sp) is “an extra-probabilistic notion” (in Dembski’s words) that, when conjoined with small probability, provides robust grounds for inferring design. At the third decision node, when both SP (small probability) and sp (specification) are present, we may reasonably infer design as the cause of an event E.
It’s important, Dembski said, to see how specification and small probability necessarily work together to lead us to infer design. “Our naive intuition,” he noted, “is that SP events simply don’t happen and can be safely ignored.” But, he continued, that can’t be right: small probability events happen all the time. Flip a coin 1000 times, and you will have participated in an SP event with a probability of 1 in 10^300.
But if a stranger approaches you on the street the next day and gives you a piece of paper with the exact sequence of coin flips (recorded as 1s and 0s) that you produced in the privacy of your study, you’re entitled to suspect some funny business. That stranger gave you a specification, and its match with the SP event you independently produced calls for an explanation. As Dembski put it,
If a probabilistic setup, like tossing a coin 1000 times, entails that some SP event will occur, then necessarily some extremely improbable event will occur. If, however, independently of the event we are able to specify it, then we have cause for surprise and alarm. It’s the specified SP events that cannot be attributed to chance.
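The arithmetic behind the coin-flip illustration is easy to check; working in logarithms avoids numerical underflow. A quick sketch in Python (the script is mine, not Dembski's):

```python
import math

# Any one exact sequence of 1000 fair coin flips has probability (1/2)**1000.
# Rather than compute the tiny number directly, count its decimal digits:
# -log10((1/2)**1000) = 1000 * log10(2), about 301.
exponent = 1000 * math.log10(2)
print(f"P(exact 1000-flip sequence) = 1 in 10^{exponent:.0f}")  # 1 in 10^301
```

The result, roughly 1 in 10^301, matches the order of magnitude Dembski cites.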
In the Smith/Doe case the match is between the SP event itself (microwave damage to Smith’s pacemaker, causing his death) and the seven lines of evidence that jointly constitute a specification of the event. Note that we could have started our investigation with one or another aspect of the specification, e.g., the witnesses placing Jane Doe at the house, and only later — as evidence accumulated — examined the pacemaker for microwave damage. The temporal relation in our knowledge of the small probability event and the specification is not important. It’s the match between them that convinces us of design and eventually lands Jane Doe in prison for first degree murder.
This hypothetical example may seem somewhat fanciful. But, as Dembski pointed out, inferences with exactly this logical structure (i.e., following this “standard operating procedure”) are routinely employed not only by detectives but also by:
- Copyright and patent offices to identify theft of intellectual property
- Insurance companies to prevent themselves from being defrauded
- Skeptics to debunk the claims of parapsychology experiments
- Scientists to identify cases of data falsification
- The NASA SETI program to identify the presence of extraterrestrial intelligence
In other words, we derive and place great weight on design inferences all the time.
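The “standard operating procedure” these inferences follow can be sketched as a short decision routine. The probability thresholds below are illustrative placeholders of my own; Dembski assigns no particular numbers to the HP and SP cutoffs:

```python
def explanatory_filter(prob, specified,
                       hp_threshold=0.9, sp_threshold=1e-10):
    """Sketch of the three decision nodes in the flow chart.

    prob      -- estimated probability of the event E
    specified -- whether an independent specification of E exists
    Thresholds are illustrative placeholders, not Dembski's values.
    """
    if prob >= hp_threshold:     # node HP: high probability -> necessity
        return "necessity"
    if prob >= sp_threshold:     # node IP: intermediate probability -> chance
        return "chance"
    if specified:                # node SP/sp: small probability AND specified
        return "design"
    return "chance"              # small probability, but unspecified

# The three pacemaker scenarios:
print(explanatory_filter(0.99, False))   # negligent Smith: necessity
print(explanatory_filter(1e-3, False))   # battery lottery: chance
print(explanatory_filter(1e-20, True))   # microwave damage + evidence: design
```

Only the third branch, where small probability and an independent specification coincide, yields a design verdict; an unspecified small-probability event still falls to chance, as in the battery-lottery scenario.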
Solving Scientific Mysteries?
Still, Dembski asked, “Is what’s good enough for a court of law good enough for science?” Some worries come crowding in:
Even if one grants that the flow chart accurately describes the pre-theoretic practice of nonscientists in making design inferences, it’s not clear that this descriptive account should be in any way normative for the practice of science . . . doesn’t design . . . always leave us open to a God-of-the-gaps objection? And since design does not figure into contemporary scientific practice, why not rather dispense with it, and concentrate on the bread-and-butter explanations of science, to wit, chance and necessity?
Flow charts are all very well, but can we be certain that design inferences are valid? “It turns out,” said Dembski, “that a valid deductive argument does indeed undergird the standard operating procedure that people use to infer design.” The argument may be expressed as follows:
The Argument to Design
Premise 1: E is specified.
Premise 2: E has occurred.
Premise 3: E has occurred either by chance, necessity, or design.
Premise 4: E did not occur by necessity.
Premise 5: If E occurred by chance, then E has probability less than or equal to p.
Premise 6: Specified events of probability less than or equal to p do not occur by chance.
Conclusion: E occurred by design.
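That the argument is deductively valid can be confirmed mechanically. The Boolean encoding below is my own: it brute-forces every truth assignment to the five propositions involved and checks that whenever all the premises hold, the conclusion holds too.

```python
from itertools import product

def argument_is_valid():
    """Check that the premises entail 'design' under every assignment."""
    for chance, necessity, design, specified, smallprob in product(
            [True, False], repeat=5):
        premises = (
            specified                                    # P1: E is specified
            and (chance or necessity or design)          # P3: trichotomy
            and not necessity                            # P4: not necessity
            and (not chance or smallprob)                # P5: chance -> small prob
            and (not (specified and smallprob) or not chance)  # P6: Law of SP
        )
        if premises and not design:
            return False   # counterexample: premises true, conclusion false
    return True

print(argument_is_valid())  # True: no counterexample exists
```

(Premise 2, that E occurred, sets the stage rather than constraining the truth table, so it is omitted from the encoding.)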
Suppose E, said Dembski, “is the event of opening a safe.” We’ve satisfied Premise 1: E is clearly specified, because “the safe is so constituted that only one of the many possible combinations opens it.” Premise 2 says simply that E has occurred. Premise 3, a trichotomy rule, says that E is due either to chance, necessity, or design. Since most of contemporary science restricts itself to chance and necessity, Premise 3 at worst introduces a superfluous element, namely design.
Premise 4 tells us the safe did not open by necessity, which is not controversial. “No known regularities of nature,” said Dembski, “account for the opening of safes with secure combination locks.” Premise 5 is likewise not controversial. “On any reasonable lock, the probability of hitting the right combination ‘by chance’ will be exceedingly small.”
Premise 6 is what Dembski calls The Law of Small Probability. This law, he argued, “is a basic regulative principle of statistics,” by which “we are entitled to eliminate chance as an explanation.”
Without this law, he stressed, we are powerless to make judgments in the face of uncertainty. In particular, statistical inference — which is indispensable to science — would be impossible without the Law of Small Probability.
In sum, when applied to the opening of a safe, these premises are unquestionably true. Since they entail the conclusion, the argument is valid, and therefore the conclusion itself must be true. The safe was opened by design. (Even if we are not absolutely certain of the premises, Dembski noted, we can still have confidence in the argument. “Entailment automatically gives us partial entailment as well.” Thus, if we hold only that the premises are very likely, the conclusion will itself be very likely.)
Which Premise Do Evolutionists Reject?
The argument to design hides no logical surprises. Yet in the literature of evolutionary biology, and the many volumes of the creation/evolution controversy, authors appeal regularly to probability considerations, exhibiting in addition an intuitive grasp of the notion of specification — but come to very different conclusions in the end. Richard Dawkins, for instance, is quite eloquent on the extraordinary specificity of organisms, and the vanishingly small likelihood that such specificity could arise by chance. Nevertheless, he denies that organisms are designed. For Dawkins, organisms are the products “of purposeless natural forces not guided by any intelligence.”
Which premise of the Argument to Design, Dembski asked, does Dawkins reject? Suppose that the event E to be explained is the occurrence of life here on earth (call this LIFE). Running through the premises individually, it is clear that neither 1 nor 2 is problematical for Dawkins (who has written in The Blind Watchmaker [BW] that organisms “have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone”). Nor is Premise 6 a problem. In BW Dawkins sets an upper bound to the “amount of luck” one is allowed to postulate for LIFE, “clearly restating,” Dembski pointed out, “the Law of Small Probability.” Lastly, although Dawkins would regard design as superfluous in the post-Darwinian scientific world, he plainly sees it as an empirical possibility. The trichotomy of Premise 3 is safe as well.
Whether Dawkins thinks LIFE is necessary (Premise 4) is, Dembski observed, less clear, because Dawkins “never assigns precise probabilities to events connected with the origin of life.” Still, it seems reasonable to think that Dawkins would accept Premise 4, given that his goal in BW is to show that the naturalistic occurrence of LIFE is probable enough, not that the probability of its occurrence approaches unity.
Premise 5 is the real focus of disagreement. “Dawkins explicitly rejects Premise 5,” said Dembski, by his “appeal to cumulative selection.” Dawkins sees selection over many generations as rendering probable what we would otherwise naively regard as improbable. It was Darwin’s genius, on this view, to provide the mechanism of selection as a naturalistic means for generating the complexity of living things.
As an empirical matter, however (Dembski continued), the status of Premise 5 “is still wide open.” It is far from clear that selective mechanisms will suffice to account for LIFE. How does one go about determining the probability of LIFE? Are the extant evolutionary scenarios really plausible? Some evolutionists, perhaps recognizing that probabilistic difficulties have carried off the plausibility of their scenarios, avail themselves of a cosmological buffet, where they fill their plates with hypothetical planets and universes in which LIFE may have arisen — thus making LIFE on this planet less surprising. As Dembski put it,
Dawkins, to explain LIFE apart from a designer, not only gives himself all the time Darwin ever wanted, but also helps himself to all the conceivable planets there might be in the observable universe (note that these are planets he must posit, since no planets outside our solar system have been observed, nor is there currently any compelling theory of planetary formation which guarantees that the observable universe is populated with planets). Thus Barrow and Tipler, in order to justify their various anthropic principles, not only give themselves all the time and planets that Dawkins ever wanted, but also help themselves to a generous serving of universes (universes which are per definitionem causally inaccessible to us).
The truth of design, Dembski concluded, is an empirical question to be settled by looking to nature. Here the philosopher must pass the problem to the scientist, for the “laws and regularities” involved in Premise 4, and the “concrete probabilities” of Premise 5, are matters of fact to be determined by observation and experiment. But scientists should know that the theory of design leaves the hands of the philosopher marked, as Bertrand Russell said,
… by no formal logical defect; its premises are empirical, and its conclusion professes to be reached in accordance with the usual canons of empirical inference. The question whether it is to be accepted turns, therefore, not on general metaphysical questions, but on comparatively detailed considerations.4
Those “considerations” are empirical. From Dembski’s perspective, the empirical details speak unmistakably of design; but we should turn to the next speaker, Michael Behe, who addressed that topic.
Michael Behe (Ph.D., Biochemistry, University of Pennsylvania, 1978) is an Associate Professor in the Department of Chemistry at Lehigh University. Behe’s research focuses on the structure of nucleic acids — specifically, working under a grant from the National Institutes of Health, he examines the structural properties of certain tracts in eukaryotic DNA to determine their ability to interact with histone proteins to form nucleosomes, the basic structures of chromatin (the material found in chromosomes).
While Dembski’s talk concerned the logical structure of the design inference, Behe addressed the biological evidence that (when fitted into the design inference) motivates many to reject neo-Darwinism in favor of design. “It will be the burden of my talk,” he said, “to show that Darwinism has been unable to account for phenomena uncovered by the efforts of modern biochemistry during the second half of this century.”
Behe’s principal target was the theory of natural selection:
Natural selection, at some level, is the putative engine which pulls the neo-Darwinian train, and if natural selection stalls, then the whole Darwinian scheme grinds to a halt.
The data of biochemistry place grave obstacles in the path of neo-Darwinian explanation by natural selection, Behe stressed, because those data appear to indicate that, “at its most fundamental level,” life is “irreducibly complex.” In the face of such complexity, selection can effect nothing.
Organisms as Black Boxes
As a first step on the path to the notion of irreducible complexity, Behe began with Darwin’s discussion of the evolution of the eye. “How a nerve comes to be sensitive to light,” said Darwin in the Origin, “hardly concerns us more than how life itself first originated.”5 Darwin could lay such questions of mechanism aside, given the rudimentary biochemical science of his day, and concentrate his attention instead on finding a series of graded intermediates between the simplest and most complex eyes, arguing that selection sufficed to bridge the differences. The less that is known about how eyes (or other complex structures) actually work, Behe noted, the easier this strategy of evolutionary explanation. One simply strings together a series of black boxes.
“Unconstrained by knowledge of the mechanism,” he said, we find it easy “to imagine simple steps leading from nonfunction to function.” Calvin and Hobbes can easily imagine that the cardboard box into which they have climbed might take them into the air. Adults, on the other hand, know that a cardboard box is no more likely to fly than a pile of stones. Human powered flight occurs via complex mechanisms, and the “black box” of an airplane is black only to those (most of us) who know nothing about aeronautics and avionics. Analogously, Behe continued,
… when the exploratory vessel H.M.S. Cyclops dredged up some curious-looking mud from the sea bottom, no less a personage than Thomas Henry Huxley became convinced that it was Urschleim … the progenitor of life itself, and Huxley named the mud Bathybius Haeckelii, after the eminent proponent of abiogenesis (German evolutionist Ernst Haeckel).
Haeckel and Huxley, seeing single-celled living things as “simple,” could regard Bathybius (an artifact caused by the alcohol used to preserve the dredged mud) as their evolutionary precursor. As the real complexity of even the simplest organisms became apparent, however, “belief in spontaneous generation faded away.”
The mistaken perception that single-celled organisms were “simple” was abetted by their status as black boxes. But just as modern biochemistry has “opened the black boxes of many biological systems,” said Behe, and elucidated at the molecular level such functions as vision, so our understanding of what it means to explain biologically should likewise shift, to take account of what we now know.
“Proteins,” said Behe, “are the machinery of living tissue that build the structures and carry out the chemical reactions necessary for life.” Much like the carpenter’s workshop that contains many different types of tools for various tasks, so “a typical cell contains thousands and thousands of different types of proteins,” to carry out the diversity of functions that sustain life. Assembled from amino acids in chains “anywhere from 50 to 1,000 amino acids” long, proteins fold up into “very precise” three-dimensional structures — and those structures determine their precise functions.
Protein structure and function are therefore as fundamentally linked as the structure and function of the tools in the carpenter’s shop. Like the tools, said Behe, “if the shapes of the proteins are significantly warped, then they fail to do their jobs.”
But how much “warping” can a protein tolerate? asked Behe. The three-dimensional structure of a protein is determined by its primary sequence. A change in that primary sequence, from a positively to a negatively charged amino acid, for instance, may affect the protein’s ability to fold properly, and hence its function. In a small protein of, say, 100 amino acid residues, there are 20 possible amino acids for each site. Thus, the probability of finding the right amino acid for the first site is 1 in 20. The probability of finding the correct amino acids in both the first and second positions is 1 in 20^2, or 1 in 400, and so on. For the entire protein, the probability would be 1 in 20 to the one hundredth power (about 1 in 10^130).
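The figure for the whole protein follows from treating the 100 sites as independent choices among 20 amino acids. A short check (mine, using base-10 logarithms to keep the number manageable):

```python
import math

sites, alphabet = 100, 20   # residues in the protein, amino acid types
# Probability of one exact sequence at random: (1/20)**100.
# Count its decimal digits instead of computing the tiny number itself.
digits = sites * math.log10(alphabet)
print(f"1 in 20^{sites} is about 1 in 10^{digits:.0f}")  # about 1 in 10^130
```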
Yet it has long been known, Behe continued, that similar proteins from different species show differences in their primary amino acid sequences, while still folding to “closely similar structures.” It is possible, therefore, “for two different but similar amino acid sequences to be structurally and functionally equivalent.” Some amino acid changes appear to be tolerated. Is there a limit, however, to what changes are possible? And is there a way of answering that question directly (rather than only comparatively)?
At MIT, in the laboratory of Robert Sauer and his colleagues, just such a direct answer was sought for several viral proteins. Taking the genes for the viral proteins, Sauer’s group systematically deleted small pieces (corresponding to the instructions for three amino acids at a time), and inserted altered pieces back into the genes at the sites of the deletions. The altered genes, placed in bacteria, produced altered proteins. Since the bacteria quickly destroy proteins which fail to fold properly, Sauer’s group was able to isolate the altered proteins that were not destroyed. By sequencing those altered proteins, the biologists could observe which amino acids, in which positions, would produce a folded, functional protein.
What Sauer’s group found, said Behe, was that some sites tolerated a great diversity of possible amino acids (up to 15 out of 20 possibilities). Other sites tolerated much less diversity: only three or four amino acids would still yield a functional protein. Other sites, however, had “an absolute requirement for a particular amino acid” — no substitutions would work:
This means that if, say, a P does not appear at position 78 of a given protein, the protein will not fold regardless of the proximity of the rest of the sequence to the natural protein.
Gathering these experimental results over the whole length of the protein, one can readily calculate the likelihood of finding a folded protein by a random mutational search: about 1 in 10 to the 65th power. The number, Behe noted, is “virtually identical to results obtained earlier by theoretical calculations,” a confirmation that “greatly increases our confidence that a correct result has been obtained.”
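The calculation proceeds by multiplying the per-site tolerances: if a site accepts t of the 20 amino acids, a random choice succeeds there with probability t/20. The tolerance counts below are invented for illustration (chosen to land near the quoted order of magnitude); only the overall roughly 1-in-10^65 result comes from the talk.

```python
import math

# Hypothetical per-site tolerances for a 100-residue protein:
# permissive sites (15 of 20 amino acids work), constrained sites
# (3 or 4 work), and absolutely required sites (only 1 works).
tolerated = [15] * 30 + [4] * 34 + [3] * 20 + [1] * 16   # invented numbers

# Sum the logs of the per-site success fractions t/20.
log10_p = sum(math.log10(t / 20) for t in tolerated)
print(f"P(random sequence folds) is about 1 in 10^{-log10_p:.0f}")
```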
As “complex and improbable as folded proteins are,” said Behe, “in many biological structures [they] are simply components of larger molecular machines.” In these larger structures, each protein component functions only “when all of the components have been assembled.”
Consider, Behe observed, the molecular machine of a cilium. Cilia are microscopic hair-like organelles found on the surfaces of many cells, that, by beating in synchrony, move fluid over the cell’s surface, or propel single cells through a fluid. The epithelial cells lining the human respiratory tract, for instance, each have 200 cilia moving synchronously to sweep mucus (bearing foreign particles) towards the throat, where it can be eliminated.
The major protein components of a cilium can be seen in the top-down cross section of Figure 2. The ciliary core, or axoneme, is a plasma membrane-coated bundle of fibers, which includes a ring of 9 double microtubules, surrounding 2 central single microtubules (the “9 + 2” array). Each outer doublet is composed of 13 filaments (A subfibers), fused to an assembly of 10 filaments (B subfibers). The individual filaments are themselves composed of two proteins called alpha- and beta-tubulin.
The 11 microtubules forming an axoneme in the “9 + 2” array are held together by three types of connectors (see Figure 2):
- The A subfibers are joined to the central microtubules by radial spokes, which terminate in a knoblike feature called a spoke head.
- Adjacent outer doublets are joined along the circumference by linkers that consist in part of a highly elastic protein called nexin.
- The central microtubules are joined by a connecting bridge.

Each type of connector is repeated along the length of the axoneme with its own characteristic periodicity.
Finally, every A subfiber has two arms — an inner arm and an outer arm — both containing the protein dynein.
So, asked Behe, “how does a cilium work?” Experiments have shown that ciliary motion is caused by the chemically-powered “walking” of the dynein arms along the adjacent B subfibers. (See the side view cross section of an axoneme segment in Figure 3.) Using ATP — the common carrier of cellular chemical energy — as their power source, the dynein arms on one microtubule “walk” up the neighboring B subfiber of a second microtubule, so that the two microtubules slide past each other.
This sliding motion is transformed into a bending motion by the nexin protein cross-links. The nexin cross-links keep the neighboring microtubules from sliding past each other by more than a short distance, thereby converting the dynein-induced sliding motion into a bending motion of the entire axoneme.
What happens to the bending function of the cilium, however, if its components are removed experimentally one by one? Remove the dynein arms, said Behe, and the cilium becomes rigid and inflexible. Its flexibility can be restored only when the dynein is replaced. Remove the nexin cross-links (by exposing them to the proteolytic enzyme trypsin), and the microtubule doublets slide past each other without stopping. The axoneme simply falls apart. Remove the alpha-and beta-tubulins, and there will be no filaments at all to bend. Removing one or another of the ciliary proteins, concluded Behe, “is like trying to design a pulley without a rope, or a lever without a fulcrum.” Each protein has its proper function only when all are present.6
The cilium, continued Behe, is an example of “irreducible complexity.” While plainly a complex structure, like an eye or a feather, the cilium possesses a complexity of a remarkable type:
It is also irreducible complexity. By this I mean that the components of the cilia are not themselves composed. They are single molecules. There are no more black boxes to invoke: the complexity is final.
The implications of irreducible complexity for the theory of natural selection are devastating. “Since the complexity of the cilium is irreducible,” said Behe, “it cannot have functional precursors.” The evidence at hand seems strongly to suggest that one either has a cilium, with all its necessary protein components in place — or one has nothing (or nothing functional, certainly). There seems to be no actual or even imaginable gradient of simpler molecular structures leading up to a cilium. Yet such a gradient is exactly what natural selection requires. Because the cilium does not have functional precursors, said Behe, “it cannot be produced by natural selection, which requires a continuum of function to work. Natural selection is powerless where there is no function to select.” And if the cilium cannot be produced by natural selection, he added, “then the cilium was designed.”
The Burden of Proof and the Study of Molecular Evolution
Darwin himself (Behe continued) set the standard of proof for advocates of the theory of design in the Origin:
If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find no such case.7
Yet, Behe argued, “Examples of irreducible complexity can be found on virtually every page of a biochemistry textbook.” Although the cilium is a striking example, because of its manifestly mechanical aspects, other such systems abound:
Other examples of irreducible complexity [include] aspects of blood clotting, closed circular DNA, electron transport, the bacterial flagellum, telomeres, photosynthesis, transcription regulation — virtually any biochemical system.
Where can one go, asked Behe, to find plausible step-by-step Darwinian scenarios for the origin of these complex systems? “A good place to look for an answer to that question,” he said, “is in the Journal of Molecular Evolution (JME).” JME was started “specifically to deal with the topic of how evolution occurs on the molecular level.” It has high standards and is edited by prominent figures in the field of molecular evolution.
Yet, to the observer looking for Darwinian explanations, JME can only be regarded as a great disappointment. Behe tallied the results of the journal’s past 10 years of publication:
JME has published 886 papers. Of these, 95 discussed the chemical synthesis of molecules necessary for the origin of life, 44 proposed mathematical models to improve sequence analysis, 20 concerned the evolutionary implications of current structures, three discussed biochemical properties of current organisms, and 719 were analyses of protein or polynucleotide sequences. There were zero papers discussing models for intermediates in the development of complex biomolecular structures.
“If one looks at other journals or books,” he continued, “the story is the same.” Sequence comparisons abound, while models for the actual evolution of complex systems are hard to find.
“It is important to realize,” Behe said, in ending his talk, “that we are not inferring design from what we do not know, but from what we do know. We are not inferring design to account for a black box, but to account for an open box.” While we may be shocked to find open boxes speaking plainly of design, he said, “we must deal with our shock as best we can and go on.”
One can imagine that many listeners would find Behe’s talk compelling, but, in the end, intuitively unsatisfying. Such listeners might reason as follows: “To be sure, the biological world is replete with complex systems, for which we have no plausible or even sketchy naturalistic explanation. However, it is the business of science to provide such explanations, and not something else. Where a natural explanation is lacking, we place the problem on a shelf, marking it ‘unsolved,’ and if it gathers dust there, so be it. Science traffics, after all, in the empirical. What philosophers or theologians care to do is their business. But don’t ask scientists to infer design when that inference goes against all that their enterprise stands for!”
Behe would doubtless have his own sharp answer to this line of argument. But the next two speakers, Stephen Meyer of Whitworth College, and Paul Nelson (me), spoke directly to several variants or conceptual relatives of the argument. I turn next, then, to Steve Meyer.
Stephen Meyer (Ph.D., History and Philosophy of Science, Cambridge University, 1991) is Assistant Professor of Philosophy at Whitworth College in Spokane, Washington. Meyer’s Cambridge dissertation, on methodological questions in origin-of-life research, set him to thinking about a number of philosophical claims that have been used to adjudicate (actually, to dismiss) the intelligent design explanation for the origin of life.
Those claims hold that there are in principle sound methodological reasons for refusing to allow design as an explanation for the origin and diversity of life. However, Meyer contended, on critical examination the soundness of those reasons dissolves away. There are therefore no compelling philosophical grounds for excluding design; or, to put it another way, judged by non-question-begging criteria, design is fully as explanatory as any other cause.
Yet, Meyer noted, most biologists are ill-disposed to see design as a genuine explanation. Their resistance is puzzling, however, in the light of (a) the persistence of teleological language in biology, and (b) the lack of progress in the reductionistic, naturalistic research program on the problem of the origin of life.
Consider the persistence of teleology. As Stanford historian of biology Timothy Lenoir has noted,
Teleological thinking has been steadfastly resisted by modern biology. And yet, in nearly every area of research biologists are hard pressed to find language that does not impute purposiveness to living forms.8
Biological objects, said Meyer, seem designed. And attempts to render the appearance of design illusory, by showing that it is a necessary consequence of natural laws acting at lower levels, have been less than successful. Resistance to design as an explanation cannot stem, for instance, from the achievements of the naturalistic thrust in origin-of-life research.
One of Meyer’s own dissertation advisors, an expert in the origin-of-life field, noted to Steve in 1989 (after returning from an international conference on the topic, in Prague) that the question “How did life arise naturalistically?” had in recent years acquired the potential to become a spawning ground for scientific cranks — so little was the field united by any one or even small number of generally accepted theories. Francis Crick’s agnosticism on the same subject is well-known:
An honest man, armed with all the knowledge available to us now, could only state that in some sense, the origin of life appears at the moment to be almost a miracle, so many are the conditions which would have had to be satisfied to get it going.9
Klaus Dose, another expert in the field, found little cheer in the fruits of the last few decades:
More than 30 years of experimentation on the origin of life in the fields of chemical and molecular evolution have led to a better perception of the immensity of the problem of the origin of life on Earth rather than to its solution. At present all discussions on principal theories and experiments in the field end either in stalemate or in a confession of ignorance.10
Other such sentiments have been widely expressed.
Why Exclude Design as an Explanation?
“Why then the expectation,” asked Meyer, “that we will find the answer to the question in naturalistic terms?” Why no consideration whatsoever of the possibility of a scientific theory of intelligent design?
“For most scientists,” he continued, “there is a perception that the ‘rules of science’ forbid those types of inferences — that is, inferences to a pre-existent intelligence.” Philosopher of science Nancy Murphey casts the issue in terms of what she thinks science itself seeks, namely, naturalistic explanations for all natural processes. “Christians and atheists alike,” Meyer quoted Murphey as arguing, “must pursue scientific questions in our era without invoking a creator.” Any reference to a creator ipso facto leaves the realm of science and enters that of metaphysics and theology.
“This is the answer to our question,” said Meyer. “Our era is one which proscribes the possibility, which outlaws the possibility of talking about creative intelligence as an explanatory entity within science.” But when exactly did this proscription arise? Nancy Murphey, noted Meyer, admitted that the naturalistic definition of science has dominated for only about 130 years. “It’s historically contingent,” he continued. “Most of biology prior to Darwin was in a creationist framework. Newton and Boyle, during the period of the Scientific Revolution, were quite fond of making design arguments, and not just on the basis of biology, but in optics and astronomy as well.”
The issue can be framed as the “categorical opposition” of the philosophical doctrine of methodological naturalism versus intelligent design. Methodological naturalism simply does not admit the possibility of intelligent design. One can accept the theory of intelligent design, of course, but not as a scientific proposition. “Or, as I’ve heard many times,” joked Meyer, “it might be true, but it can’t be science.”
But is methodological naturalism, asked Meyer, “purely an arbitrary convention?” If so, some people may no longer feel themselves bound by it. On the other hand, if good reasons ground methodological naturalism, “perhaps the ‘rules of science’ ought to continue as they are.”
Demarcation Arguments as ‘Litmus Tests’ for Scientific Standing
The ASA, Meyer noted, “tends to defer to methodological naturalism as a convention, on the basis of ‘what science does.’ Our secular colleagues, however,” he continued, “do attempt to justify the convention,” offering arguments for a purely naturalistic science. Within the philosophy of science proper, such arguments are generally called demarcation arguments. Demarcation arguments attempt to distinguish “true science” from all other human activities, in particular, from “pseudoscience, metaphysics, religion — and other bad things of that sort.” Although “that may sound facetious,” Meyer continued, “there’s an attempt for a distinction of epistemic value or epistemic warrant, on the basis of some philosophical litmus test.” For example, a truly scientific theory must be falsifiable, testable, and explain by reference to natural law. These are all examples of criteria that (putatively) distinguish true science from pseudoscience.
“Now the main rap on intelligent design,” said Meyer, “is first of all that it’s not naturalistic.” But methodological naturalism comes into play because design allegedly fails to meet the standard demarcation criteria: it’s not testable, doesn’t explain by reference to natural law, it’s unobservable — “you can’t put God in a test tube and study Him.” Demarcation arguments of this sort are regularly given as reasons for completely rejecting design as a possible explanation.
But within the philosophy of science, “demarcation arguments have totally failed.” This is a general judgment, said Meyer, that can be nicely illustrated by a particular case. One of the things that emerged from the 1981 Arkansas “equal time” (creation/evolution) trial was the ACLU’s skill at persuading the late Federal judge William Overton to accept its construal of the philosophy of science. The philosophy of science promulgated by the ACLU excluded young-earth creationism as being in principle nonscientific; creationism was trapped philosophically under the demarcation arguments offered by expert witness Michael Ruse.
Yet, in the wake of the trial, Ruse’s arguments were widely criticized in the philosophical community, Meyer noted, as “setting the philosophy of science back 50 years.” Ruse’s arguments were assembled from a “simplistic” logical positivism and the neo-positivism of Sir Karl Popper. The trial then became a contest of “Popper versus Popper,” as the creationists themselves assumed the soundness of Popperian neo-positivism. Ruse and the ACLU simply did a better job of persuading Overton that they, rather than the creationists, were the true Popperians.
Popper has been important in the development of the philosophy of science, allowed Meyer. But even after we give him his full due, we must face the charge that Popper’s philosophical theories fail to correspond with the way science actually functions. “Ruse, who ought to have known better in the early 80s,” said Meyer, “put forward a definition of science that neither evolutionary theory nor intelligent design could meet. Therefore there is already culturally something fishy going on with demarcation arguments.” Citing the philosophers Martin Eger and Larry Laudan, Meyer noted that demarcation arguments — although known to be deeply flawed by professional philosophers of science — continue to play a role in disputes like the creation/evolution controversy, or discussion of design.
The list of demarcation arguments against design offered in the scientific literature is long. Intelligent design is held not to be scientific because (among other things):
- It does not explain by natural law
- It invokes unobservables
- It is not testable
- It does not make predictions
- It is not falsifiable
- It provides no mechanisms
“And on and on,” said Meyer, adding that he is currently looking for all such arguments, with a standing request to those interested that any demarcation argument not already mentioned be sent to him for his exhaustive catalogue.
When demarcation arguments are applied in origins research (comprising all types of evolutionary theory and all types of intelligent design theory), “one of two things obtains,” Meyer said.
Either the arguments are applied in so narrow a way as to exclude both naturalistic descent with modification and intelligent design, or they are applied in a more liberal, loose way, and both intelligent design and naturalistic descent must be included within science. Either they exclude both, if they are applied consistently, or they include both.
“These are not scientific arguments,” stressed Meyer, “although you usually hear scientists making them. These are not arguments about nature. These are arguments coming out of the philosophy of science — actually, very bad philosophy of science.”
Whether organisms evolved by natural means from a common ancestor, or were designed, is a factual question to be settled by the evidence, Meyer urged, not by a priori philosophical arguments. “In some ways what I’d like to do,” he added, “is to put my own profession, the philosophy of science, out of business,” at least where the question of the history of life is concerned. Legitimate factual questions are being “adjudicated by philosophical and methodological litmus tests, not by the evidence itself.”
The Criterion of Explanation in Terms of Natural Law
The litmus tests fail when looked at closely. Consider the criterion of explanation solely in terms of natural law, offered most prominently in the creation/evolution dispute by the philosopher and historian of biology Michael Ruse. On Ruse’s construal of natural law, said Meyer, “everything else follows from that — falsifiability, testability, prediction, repeatability — all these things follow from science’s reliance on natural law.” No theory of creation or intelligent design, however, explains by reference to natural law; hence, no theory of design can be scientific.
However, many areas of natural science do not explain solely by natural laws, but rather explain by referring to past events. In many of the historical sciences, as in forensic inquiry (e.g., criminal detective methods), the main explanatory work is done by a reconstructive scenario that, by postulating an event or series of events, attempts to link as many of the relevant facts or circumstances as possible. One begins with a set of facts and infers into the past to the event (or events) that would best explain those facts.
Darwin’s theory of common descent, for instance, “attempts to infer from things we can see,” said Meyer, “back to an unobservable causal history. … What’s doing the explanatory work for Darwin is the assertion that certain events — unobservable transitional intermediates, if you will, in the fossil record — would explain what we see in the present.” While natural laws may play a background role in our assumptions, historical explanations (in evolutionary theory, for instance) would not work without the key events they postulate.
There is a direct parallel here to the theory of design, which postulates the past action (i.e., event) of a mind acting on matter. But if explanations in terms of past events are inadmissible, design will be “ruled out of court,” said Meyer, “which is exactly what happened.” Yet there seems to be no good reason to exclude past events as explanations, especially since in practice “we explain by individual past events all the time.”
The Criterion of Observability
“Ruse’s argument,” concluded Meyer, “does not take into account the actual diversity of methods in science, or in the historical sciences in particular.” Other demarcation arguments fare no better. It is often claimed, for instance, that “God” cannot be an explanatory term because it describes an unobservable entity — and science deals only with observables. “But if observation is the hallmark of testability,” said Meyer, “an awful lot of things in science would be out of court.” In a symposium at SMU in March 1992, the molecular biologist Fred Grinnell argued that anything which cannot be measured, counted, or put in a test tube — in other words, directly observed — simply cannot be invoked in a scientific explanation. “I asked Grinnell,” said Meyer, “if he accepted the double-helical structure of DNA, or many of the other inferred entities in molecular biology.” Science is rife with unobservables. Physics, geology, and other sciences regularly employ them: “We infer from what we can see to what we can’t see.” Darwinism, for example, refers to biological events in an unobservable past.
“But if observability is a necessary condition for being scientific,” said Meyer, “then the Darwinian theory doesn’t qualify as science either.” Make the criterion of observability more liberal, and “you save the scientific status of Darwinism. But you also let design in as well.”
“Again and again when I examined these arguments,” said Meyer, “I found that they do not discriminate.” Applied rigorously and neutrally, the arguments failed to exclude design without also excluding Darwinian descent. Applied liberally, the arguments allowed both design and descent. One could of course simply stipulate that “we want only to look at naturalistic theories” (in which case demarcation arguments, if offered as objective philosophical criteria, are in fact so much window-dressing to flummox the unsophisticated). But if one assumes a position of genuine neutrality, demarcation arguments wield a philosophical scythe that indiscriminately mows down all lines of origins research — Darwinian and design-based.
A Good Reason to Include Design
There is, however, at least one good reason to include design as a proper explanation. Meyer’s own research in the philosophy of science was on the methods of the historical sciences. “There is more than one scientific method,” he said. “In fact there are at least two.” The inductive sciences (by which we might understand physics, chemistry, and the other primarily experimental sciences) are motivated by the question “How does nature normally operate?” The historical sciences (by which we might understand cosmology, geology, paleontology, evolutionary theory and biological systematics), on the other hand, are motivated primarily by the question “How did this system or object come to be?” These are logically distinct questions. In the latter case, when we ask how something came to be, we explain by invoking causal narratives or patterns of events — employing methods often termed “abductive” or “retroductive” — to find that set of events that best accounts for the features of what we observe in the present.
This is “detective-style reasoning,” said Meyer, and while such reasoning certainly employs natural laws (the bread-and-butter of the inductive or experimental sciences), those laws are insufficient tools for answering the questions posed in the historical sciences. The point has been appreciated well by evolutionary theorists defending their domain against the skepticism of their more experimentally-minded colleagues. In evolutionary theory, says Stephen Jay Gould, “we infer history from its results.”
This means that testing, or theory evaluation more generally, will also differ in important ways between the inductive and historical sciences. As Darwin often argued to his correspondents, the theory of common descent by natural selection had to be weighed comparatively, “vis-a-vis its competitors.” Explanations are judged by their relative power, and by their consistency with what we know from the present.
“Can a theory of design be formulated to meet these standards?” asked Meyer. Yes: the theory is attempting to answer a “What happened?” question, and does so by postulating the past action of an intelligent agent. “That’s a perfectly appropriate answer,” he said, “to a perfectly appropriate historical question.” Starting with distinctive features of living systems (as discussed by Michael Behe, for instance), design attempts to account for those features by referring them to a sufficient cause, namely, an intelligence. In every respect, argued Meyer, design as a theory is logically fully consonant with the types of answers, and methods of evaluation, common to the historical sciences.
The origin of life, said Meyer, is a scientific question that cannot be settled by philosophical gerrymandering or a priori definitions. “It is an empirical question that must be left fully open to whatever hypotheses come along. There’s nothing within the philosophy of science that justifies the exclusion of design.”
Meyer ended his talk by reiterating those features of living systems he regarded as explicable only by design: the conjunction of small probability and specification, the existence of coded information, and the complex functional interdependence of the components of organisms.
“The ‘logic boards’ of living things,” said Meyer, “are best explained by design.” Yet the doctrine of methodological naturalism stands in the way of our using the theory. We need to end “this hear-no-evil, see-no-design” outlook, urged Meyer, for the good of science.
I turn next to my talk — the last of the symposium.
For simplicity’s sake, I shall refer to myself in the first person, and ask the reader’s indulgence for the informality. I am a Ph.D. candidate in Philosophy at the University of Chicago, where I also studied evolutionary theory and systematics. My dissertation treats the conceptual relationship of the theory of common descent to theories about the causal structure of animal development.
It fell to me, by the request of the symposium organizers, to answer the question of how a theory of design might be practically applied by working biologists. But this question is, in a sense, premature. A well-articulated theory of design is not yet at hand. Indeed, it may be some time before such a theory is available for direct application in the day-to-day work of biologists.
But we can pose the question in a slightly different form. If a scientist who implicitly or explicitly accepted methodological naturalism were to lay that doctrine aside, and take the possibility of design seriously, how would his or her scientific practice differ? Would it differ at all?
This question is best answered by considering perhaps the most important standing objection to design, one that (I think) reflects a profound antipathy to the theory throughout modern science and the philosophy of science. I refer of course to the “God-of-the-gaps” objection, which is taken to show design’s theoretical bankruptcy and hence its uselessness for the real work of scientific explanation. If this objection cannot be turned back, design will never gain a hearing among its skeptics.
As it turns out, however, the God-of-the-gaps objection is entirely generic. That is, the objection is an epistemological difficulty that actually afflicts all scientific theories — and therefore counts specially against none.
The Explanatory Mosaic
We might put the God-of-the-gaps objection as follows. Design, its critics argue, suggests no research of its own, and provides no answers other than the vacuous. Rather, design is parasitic on what Elliott Sober calls “the incompleteness of science.”
Let me illustrate what I mean by “parasitic” with a simple visual metaphor. Figure 4 depicts what I shall call “The Explanatory Mosaic.”
Now this is a conceptual, not physical, space. Its boundaries are set by a question I suppose pretty much all of us want answered, namely, “How did the biological world come to be?” By “the biological world,” I mean everything one might imagine falling under that phrase: the first origin(s) of life, the origins of the major groups of plants and animals, the origins of the complex molecular structures presented by Behe, and so on — right down to the origin of human consciousness.
We should note a couple of things about the Explanatory Mosaic. First, many people are working on it, over a really vast theoretical and empirical area. Most of the people have no (or only very indirect) contact with each other. But, they are working on one or another aspect of the same problem (“How did the biological world come to be?”). Thus, if we’re going to have a coherent answer when the mosaic is complete — in other words, if we’re going to have a filled-out pattern that makes sense — we’re going to need a theory (a picture) to guide our work. It’s the theory that tells us what tiles go where in this vast space.
Secondly, work on the mosaic may proceed episodically and locally. In some locations, the tiles may fall rapidly and easily into place; while in other locations, the work may move slowly, or not at all. Indeed, tiles once thought firmly situated may have to be torn out, so that a once-unified part of the mosaic returns to a heap of unsolved puzzles. There’s no guarantee that local explanatory progress translates into progress elsewhere.
By the consensus of the scientific community, neo-Darwinism (understood as the common descent of all organisms from a single ancestor, by means of variation and selection) has nicely filled in many parts of the mosaic. Work is proceeding with this general picture, as a guiding schema, very much in mind. We needn’t haggle over what percentage of the area has been filled in according to the neo-Darwinian picture (we needn’t take the metaphor that seriously), but most observers think some progress has been made. We can represent those areas by the crosshatched parts of Figure 4.
Problems With Filling in the Evolutionary Picture
Yet, unsolved problems remain. While evolutionists have a general idea of what the final mosaic will be — that is, the common descent of all organisms, with a central role for natural selection as the process theory — they don’t know much about the details. And over the past two decades there has been a rising level of discontent within neo-Darwinism about the theory’s sufficiency, or even its necessity. Consider some recent expressions of unhappiness. From geneticist Martin Kreitman:
Many of the simplest and most common patterns of morphological evolution still elude satisfactory explanation. In a recent conversation, Richard Lewontin pointed out to me that many morphological characters are essentially invariant within species — the scutellar bristle number and position in Drosophila, for example — but are manifestly different between species. The “whole problem of evolution,” according to him, is to explain this seeming contradiction. Why, he wanted to know, do characters like that exist?11
From developmental biologist Wallace Arthur:
One can argue that there is no direct evidence for a Darwinian origin of a body plan — black Biston betularia [peppered moths] certainly do not constitute one! Thus in the end we have to admit that we do not really know how body plans originate.12
From geneticists Bernard John and George Miklos:
As unpalatable as it may seem to many biologists, certain aspects of conventional evolutionary theory have become stalled, and it is futile to pretend that continuing study along the well-worn, mathematically oriented neo-Darwinian pathways will provide significant insights into key evolutionary phenomena.13
And so on. Other such worries can escalate into disenchantment even with the theory of common descent, but that would take us too far afield. The point is that large numbers of evolutionary theorists think neo-Darwinism has ceased to work as the picture (if you will) that guides their labor.
Sure, the theory handles certain phenomena well, but on the really big questions, such as the origins of the major groups, or of complex structures, there is a diminishing conviction that neo-Darwinism is up to the task. And as we take in the whole mosaic, of course, to include such problems as the origin of life, neo-Darwinism is plainly insufficient (or inapplicable).
Do Design Explanations Depend on the Incompleteness of Science?
So what does the design theorist do — according to the God-of-the-gaps objection — when he comes upon the incomplete Explanatory Mosaic? He surveys the open areas, i.e., the unsolved problems, and wherever they exist lays down a quick, easy, uniform veneer of design. (See Figure 5, area filled by vertical lines.) Do you have an unsolved problem? It’s no problem: God did it. The theory of design must be true. Look at all these unsolved problems and open areas the theory so readily and completely fills!
Here’s how Elliott Sober puts the design theorist’s move:
This argument begins with the fact that there are many features of the living world that evolutionary theory cannot now explain. The origin of life, for instance, remains an active area of scientific research. … Science is shot through with ignorance. Doesn’t this provide an opportunity for creationist explanations to be pressed home?14
But, Sober continues, this doesn’t follow:
Our current ignorance is no evidence for the truth of any explanation, creationist or otherwise. The fact that we currently do not understand various facts about life is no reason to think that God has intervened in life’s history.15
Why not? The answer can be seen by looking at Figure 5. Suppose the crosshatched area in the lower left-hand corner of the mosaic were to expand a bit. Research reveals new evolutionary mechanisms, and thus resolves a long-standing problem in, say, the adaptive modification of mammalian limbs. As that area expands, the area occupied by the design theory necessarily shrinks.
But if design’s area can shrink, then nothing was there in the first place. If we imagine that, in Figure 5, the design tiles (the vertical lines) were genuinely occupying explanatory space, then evolutionary theory couldn’t expand into the space occupied by design. That would be geometrically impossible, the plane (within the boundaries set by the question) having been completely tiled.
But (on the objection we’re considering) the area occupied by design necessarily shrinks as evolution expands. Thus, the explanatory content of design is governed entirely by whatever problems happen to be unsolved by evolutionary theory. As Sober puts it,
Creationists try to parlay the current incompleteness of scientific knowledge into points in their favor. … We can expect creationists in the future to choose a different array of phenomena since many of the problems that currently puzzle science probably will be sorted out in the future.16
Here we might press the question of why (on this objection) design has no content, and necessarily retreats before the advance of evolution. The usual answer says that the principal cause of design theory — an omnipotent, invisible deity — is utterly inscrutable. Moreover, an inscrutable cause can be invoked at will, wherever one pleases:
In its ultimate extension, [the design hypothesis] represents what might be termed the ‘Will of Allah’ point of view: whatever happens is God’s choice. Putting it the other way around, God’s choice is whatever happens, and this means that a divinity can always be invoked without the possibility of challenge.17
Scientists do not think they now have all the answers. That is why they continue to do research. On the other hand, creationists have at hand an all-purpose explanation for any observation you please. The origin of life, the distribution of modes of reproduction, and everything else can be explained by a four-word hypothesis: “It was God’s will.”18
In general, the God-of-the-gaps objection holds that unsolved problems are only that: unsolved problems. Design is parasitic because it rides like a conceptual remora on the back of the prevailing naturalistic theory, drawing whatever content it has from the shortcomings of that theory. As those shortcomings are resolved, design is correspondingly diminished. Sober nicely expresses the intuition that motivates the objection. “The past successes of scientific explanation,” he writes, “suggest that what is now inexplicable may eventually be brought within the scope of scientific understanding.”19
This objection has long troubled me, and I have here tried to express it as forcefully as possible. How should it be answered?
What are the “Gaps” in the Phrase “God of the Gaps”?
We might start by looking at the very phrase “God-of-the-gaps.” To what does the word “gap” here refer? Is the gap in the world, among the phenomena? Of course not. The gap is given by — is relative to — some theory about the phenomena. The gap exists in our heads (as it were), because a theory has posed a puzzle for which we do not have an answer; yet the theory also tells us to expect to find one.20
But if the theory we are presupposing is false, then the gap we want to fill, or the answer we are seeking, may not exist. Let’s return to the metaphor of the mosaic. If we presuppose the truth of naturalistic evolution, then we will set to work filling the mosaic according to that picture. It’s the theory, after all, that tells us where to lay the tiles, or to try to lay the tiles.
If the tiles won’t go into place, however, we have some grounds for thinking that the theory guiding everyone’s work might not be true. If the difficulties persist, and a pattern of unsolved problems emerges, we might wonder whether another picture wouldn’t do a better job of guiding research. That picture would dissolve some or perhaps most of the “gaps” (research problems) of the older picture, by rendering them ill-framed, or nonexistent.
Consider the puzzle of the naturalistic origin of life. This “gap” arises for evolution because according to that theory the parts must precede the whole: that is, the nonliving constituents of organisms must be temporally and causally prior to organisms (first came methane, carbon monoxide, ammonia, and water; from these, amino acids; from these, proteins, and so on). Furthermore, only “natural” causes may be employed in scientific explanation.
But there is nothing in the question, “How did living things come to be?” — the question accepted as valid by both evolutionists and non-evolutionists as setting the boundaries of the mosaic, but not determining its internal patterns — that dictates the necessity or truth of either evolutionary assumption. Indeed, with biological systems, the whole may well precede its parts, and in the absence of some sound philosophical justification the prohibition against design as a cause is arbitrary and question-begging.
Looking for a Theory of Design with Positive Content
If we take a design-based theory as our guiding picture, however, the gap created by the evolutionary puzzle “How did life arise naturalistically?” wouldn’t be so much filled by design as dissolved by it. That question is, as Kuhn writes of the problems posed by theories generally:
… a puzzle for whose very existence the validity of the paradigm must be assumed. Failure to achieve a solution discredits only the scientist and not the theory. Here … the proverb applies: ‘It is a poor carpenter who blames his tools.’21
It is not a poor carpenter who blames the blueprint, however, when he finds that it dictates an impossible structure. It is fully rational for a scientist who recognizes the intractability of his research puzzle to abandon it, if he discovers that the puzzle presupposes something false. (Indeed it would be irrational to do otherwise.) It is likewise fully rational for a scientist to find another puzzle to solve, one posed by a theory grounded on different principles.
To do so, of course, we carpenters (or scientific mosaic-builders) must have a theory of design that projects its own patterns into the space established by the question, “How did living things come to be?” It would then not be evolutionary theory telling us what to expect observationally and theoretically, but design (see Figure 6). Some of the so-called “unsolved problems” of evolutionary theory might then become design-based predictions, perhaps framed as proscriptions, that is, as propositions of the form “event or phenomenon x will not occur.”
Consider an example. In 1983, the creationist molecular biologist Siegfried Scherer published a paper in the Journal of Theoretical Biology on the evolution of light-driven cyclic electron transport, the energy-producing mechanism of bacterial photosynthesis. He estimated the number of basic functional states required to evolve “a microorganism with a light-driven cyclic electron transport process” from “an anaerobic heterotrophic microorganism lacking membrane-associated electron transport” — a critical step in the early evolution of life.
Scherer estimated that no fewer than five new proteins would be needed to move from “fermentative bacteria, perhaps similar to Clostridium” to fully photosynthetic bacteria. Taking known mutation rates and the estimated numbers of mutations required for the five new proteins, Scherer calculated the probability that the necessary basic functional states could have evolved. Assuming a mutation rate of 10^-4, “the range of probabilities estimated,” he concluded, “is between 10^-40 and 10^-104 … In other words, in 10^9 years an FeS-protein may sometimes appear, whereas photopigments and quinones” — other proteins required for the evolutionary transition — “are never expected.”22
It’s a difficult problem for evolutionary theory, to say the least. How, within the available time, and by known mechanisms, did the necessary bacterial proteins arise? “From the data presented,” concluded Scherer, “the evolution of cyclic photosynthetic electron transport is an unsolved problem in theoretical biology. On the basis of present understanding, no solution can be expected.”23
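The shape of Scherer’s argument can be seen in a back-of-envelope sketch. Everything here except the 10^-4 mutation rate is an illustrative assumption of mine (the independence of mutations, the number of required changes k, the number of trials N), not Scherer’s actual model; the point is only to show why probabilities of this kind collapse so quickly.

```python
# Back-of-envelope sketch of the small-probability argument in the text.
# The modeling choices (independence, k, N) are illustrative assumptions,
# NOT Scherer's actual calculation.

def p_new_protein(k, mu=1e-4):
    """Probability that k required mutations co-occur in one lineage,
    assuming each arises independently at rate mu (the rate cited above)."""
    return mu ** k

# Ten required changes already land at the optimistic end of the
# 10^-40 to 10^-104 range quoted from Scherer.
p = p_new_protein(10)   # (1e-4)**10, i.e. about 1e-40

# Expected occurrences given N opportunities (generations x population
# size over ~1e9 years; N here is a deliberately generous guess).
N = 1e30
expected = N * p        # far below 1: effectively never, on this accounting
```

The sketch also makes the contested assumption visible: if the required changes need not arise simultaneously and independently — if intermediate states are selectable — the exponential collapse disappears, which is exactly where defenders of the evolutionary account press back.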
That’s how the problem looks if we presuppose naturalistic evolution. The tiles won’t go into place. From the perspective of design, however, this research problem would very likely never arise. Complex systems with interdependent components, exhibiting specification and small probability, are — according to the theory of design — the products of an intelligent cause. Expressing the same point proscriptively, we might say that the laws of physics and chemistry are causally insufficient to generate the specified complexity of organisms, and that appeals to chance mechanisms are precluded by the Law of Small Probability (premise 6 of Dembski’s argument, given above). The design theorist would take something like this to be a general law. Starting from that law, the design theorist would predict that the naturalistic formation of complex biological systems is probabilistically beyond the realm of the possible.
Such a prediction is, of course, vulnerable to refutation. Indeed, assuming that exact bounds can be put on theoretical notions such as “specified complexity,” and that probability estimates can be made rigorously, it should be possible to run experiments set up precisely to test design-based predictions. In so doing, the design theorist may be surprised to discover that “unassisted” nature is capable of more innovation than he suspected. On the other hand, his predictions may hold true: the specified complexity of organisms may be causally irreducible.
In short, the design theorist could stumble, or succeed, in any number of ways. But this is true for all scientists. We always know less than we need to know, and our theories are never what they should be. These difficulties, of course, are entirely in keeping with the nature of empirical inquiry. Philosophers of science have a name for this epistemological difficulty, one of the first they learn about in their training: the problem of induction.
The problem of induction follows from the necessary finitude of our experience. All of our claims about the world may be overturned as the compass of our experience increases. There are only white swans, we say, until a black swan crosses the lawn. But a black swan might cross the lawn (so to speak) of any general empirical claim. Design theorists face no special difficulties in this regard. It might turn out to be the case that any given design-based prediction does not obtain, but this is possible for any nontrivial prediction entailed by any theory.
Once we see that “gaps” are theory-dependent, and that design does not propose to fill the gaps left unsolved by naturalistic evolution, but rather to project its own pattern of explanation and research problems, all that remains of the formidable God-of-the-gaps objection is the problem of induction.
And if the “God-of-the-gaps” objection indeed falls under the heading of the problem of induction, it poses no special challenge to a theory of design. The finitude of experience is a difficulty common to scientific inquiry and in fact to human knowledge generally. We could always be wrong! Let philosophical worries of that sort frighten you, however (any further than they should, which isn’t much), and David Hume will keep you from climbing out of bed in the morning. The floor might not be there as you step down into your slippers.
Looking over the unfinished mosaic of naturalistic evolution, the design theorist wants to exclaim, “Let me show you why those problems continue to go unsolved: the picture guiding your work is false.” But he finds his advice is unwanted, and his understanding of science is rejected as “inserting religion where it doesn’t belong.”
In response the design theorist can point to the philosophical shoddiness and opportunism of narrowly naturalistic construals of scientific explanation. The game has been rigged against design. But another response appeals to the skeptical bystander who wonders whether design might not have some real explanatory power. This bystander doesn’t care if the theory requires an intelligent cause. Such causes are known to exist (human intelligence) or hypothesized to exist (extraterrestrial intelligence, divine intelligence). This bystander — student, scientist, philosopher — wants to see if design might not be true.
And that’s an interesting question, well worth asking and trying to answer. The task is to find a good theory of design and to test it.
- W.H. Auden, “The Poet and the City,” in the collection The Dyer’s Hand.
- In this essay by “theory of design” I mean a theory that seeks to explain the origin of biological structures. Theories of cosmological or physical design have of course been formulated; I do not treat them here.
- I have elaborated this example (with Dembski’s approval) beyond that actually presented at the symposium, to illustrate the explanatory roles of Dembski’s key notions.
- Bertrand Russell, A History of Western Philosophy, New York: Simon and Schuster, 1945, p. 589.
- Charles Darwin, On the Origin of Species, 1st ed., Cambridge, Mass.: Harvard University Press, 1964; p. 187.
- Additional evidence supporting this view has been obtained from “genetic dissection” experiments. “The favorite object of such studies is the unicellular Chlamydomonas reinhardii, which has two flagella that propel it through the water. Many nonmotile mutants have been isolated: in some, the mechanism of flagellar assembly is defective and the flagella are absent or rudimentary; in others, flagella are present but immobile. In the latter case, the defect is likely to be in a protein component of the motor mechanism. Various structural abnormalities are apparent in electron micrographs of such mutant flagella. … In one class of mutants the only detectable change is the loss of the dynein arms. In a second class the mutants lack only the radial spokes, while in a third they lack both the central pair of microtubules and the inner sheath. In all three classes the isolated membrane-free axonemes fail to move in the presence of ATP” (Bruce Alberts, Dennis Bray, Julian Lewis, Martin Raff, Keith Roberts, James D. Watson, Molecular Biology of the Cell, New York: Garland Publishing, 1983; pp. 567-568).
- Darwin, Origin, p. 189.
- Timothy Lenoir, The Strategy of Life, Chicago: University of Chicago Press, 1989; p. ix.
- Francis Crick, Life Itself, New York: Simon & Schuster, 1981, p. 88.
- Klaus Dose, “The Origin of Life: More Questions Than Answers,” Interdisciplinary Science Reviews 13 (1988): 348-56; p. 348.
- Martin Kreitman, “Will Molecular Biology Solve Evolution?” in Molds, Molecules, and Evolution, ed. Peter R. Grant and Henry S. Horn, Princeton: Princeton University Press, 1992; p. 134.
- Wallace Arthur, Theories of Life, New York: Penguin Books, 1987; p. 180.
- Bernard John and George Miklos, The Eukaryote Genome in Development and Evolution, London: Allen & Unwin, 1988; p. 335.
- Elliott Sober, “Creationism,” chapter 2 of Philosophy of Biology, Boulder: Westview Press, 1993; p. 54.
- Ibid., p. 55.
- Steven Stanley, The New Evolutionary Timetable, New York: Basic Books, 1981, p. 174.
- Sober, “Creationism,” p. 55.
- Hence, Sober’s phrase “the incompleteness of science” is really too general. The “incompleteness” of which he speaks is always theory-relative, and thus should be considered in the light of whatever theory is at issue. At any time, “science” (our body of knowledge about the natural world) will be incomplete — necessarily incomplete, one might say. However, from one period to the next the areas thought incomplete will differ greatly. The “research problems” of the mesmerist or phlogiston chemist simply do not arise for the modern psychologist or physical chemist.
- Thomas S. Kuhn, The Structure of Scientific Revolutions, Chicago: University of Chicago Press, 1970; p. 80.
- Siegfried Scherer, “Basic Functional States in the Evolution of Light-driven Cyclic Electron Transport,” Journal of Theoretical Biology 104 (1983): 289-299; p. 296.
- Ibid., p. 298.