Does Evolution Even Have A Mechanism?

Address to the American Museum of Natural History
William A. Dembski
American Museum of Natural History
April 23, 2002
Talk delivered at the American Museum of Natural History, 23 April 2002, at a discussion titled “Evolution or Intelligent Design?” The participants included ID proponents William A. Dembski and Michael J. Behe as well as evolutionists Kenneth R. Miller and Robert T. Pennock. Eugenie C. Scott moderated the discussion. An introduction was given by Natural History editor Richard Milner. For coverage of this debate, see Scott Stevens's article in The Cleveland Plain Dealer.

Evolutionary biology teaches that all biological complexity is the result of material mechanisms. These include principally the Darwinian mechanism of natural selection and random variation, but also include other mechanisms (symbiosis, gene transfer, genetic drift, the action of regulatory genes in development, self-organizational processes, etc.). These mechanisms are just that: mindless material mechanisms that do what they do irrespective of intelligence. To be sure, mechanisms can be programmed by an intelligence. But any such intelligent programming of evolutionary mechanisms is not properly part of evolutionary biology.

Intelligent design, by contrast, teaches that biological complexity is not exclusively the result of material mechanisms but also requires intelligence, where the intelligence in question is not reducible to such mechanisms. The central issue, therefore, is not the relatedness of all organisms, or what typically is called common descent. Indeed, intelligent design is perfectly compatible with common descent. Rather, the central issue is how biological complexity emerged and whether intelligence played a pivotal role in its emergence.

Suppose, therefore, for the sake of argument that intelligence--one irreducible to material mechanisms--actually did play a decisive role in the emergence of life’s complexity and diversity. How could we know it? To answer this question, let’s run a thought experiment. Imagine that Alice is sending Bob encrypted messages over a communication channel and that Eve is eavesdropping. For simplicity let’s assume all the signals are bit strings. How could Eve know that Alice is not merely sending Bob random coin flips but meaningful messages?

To answer this question, Eve will require two things: First, the bit strings sent across the communication channel need to be reasonably long--in other words, they need to be complex. If not, chance can readily account for them. Just as there’s no way to reconstruct a piece of music given just one note, so there is no way to preclude chance for a bit string that consists of only a few bits. For instance, there are only eight strings consisting of three bits, and chance readily accounts for any of them.
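The arithmetic behind this point can be sketched in a few lines (Python, illustrative only): enumerating all three-bit strings shows there are just eight of them, so any particular one has a very comfortable chance probability of 1/8.

```python
from itertools import product

# Enumerate every string of 3 bits: there are 2**3 = 8 of them.
three_bit_strings = ["".join(bits) for bits in product("01", repeat=3)]
print(three_bit_strings)
print(len(three_bit_strings))  # 8

# Under fair coin flips, each particular 3-bit string has probability 1/8,
# so chance readily accounts for any of them. Only long (complex) strings
# push this probability low enough to make chance implausible.
prob = 1 / 2**3
print(prob)  # 0.125
```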

There’s a second requirement for Eve to know that Alice is not sending Bob random gibberish: Eve needs to observe a suitable pattern in the signal Alice sends Bob. Even if the signal is complex, it may exhibit no pattern characteristic of intelligence. Flip a coin enough times, and you’ll observe a complex sequence of coin tosses. But that sequence will exhibit no pattern characteristic of intelligence. For cryptanalysts like Eve, observing a pattern suitable for identifying intelligence amounts to finding a cryptographic key that deciphers the message. Patterns suitable for identifying intelligence I call specifications.

In sum, Eve requires both complexity and specification to infer intelligence in the signals Alice is sending to Bob. This combination of complexity and specification, or specified complexity as I call it, is the basis for design inferences across numerous special sciences, including archaeology, cryptography, forensics, and the Search for Extraterrestrial Intelligence (SETI). I detail this in my book The Design Inference, a peer-reviewed statistical monograph that appeared with Cambridge University Press in 1998.
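Eve's two-part test can be caricatured as a toy filter (Python; the length threshold and the repeating-key "specification" are hypothetical stand-ins for illustration, not Dembski's formal criterion): only a signal that is both long enough and matches an independently given pattern triggers the inference.

```python
def is_complex(bits: str, min_length: int = 100) -> bool:
    # Complexity proxy (illustrative): the string must be long enough
    # that chance alone is an implausible explanation (2**-100 is tiny).
    return len(bits) >= min_length

def is_specified(bits: str, pattern: str) -> bool:
    # Specification proxy (illustrative): the string matches an
    # independently given pattern -- here, a repeating key Eve has found.
    key = (pattern * (len(bits) // len(pattern) + 1))[:len(bits)]
    return bits == key

def infer_design(bits: str, pattern: str) -> bool:
    # Only strings that are BOTH complex and specified trigger the
    # design inference; either condition alone is insufficient.
    return is_complex(bits) and is_specified(bits, pattern)

signal = "10" * 60                      # long and patterned
print(infer_design(signal, "10"))       # complex and specified: True
print(infer_design("10", "10"))         # specified but too short: False
```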

So, what’s all the fuss about specified complexity? The actual term specified complexity is not original with me. It first occurs in the origin-of-life literature, where Leslie Orgel used it to describe what he regards as the essence of life. That was thirty years ago. More recently, in 1999, surveying the state of origin-of-life research, Paul Davies remarked: “Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity” (The Fifth Miracle, p. 112).

Orgel and Davies used specified complexity loosely. In my own research I’ve formalized it as a statistical criterion for identifying the effects of intelligence. For identifying the effects of animal, human, and extraterrestrial intelligence the criterion works just fine. Yet when anyone attempts to apply the criterion to biological systems, all hell breaks loose. Let’s consider why.

Evolutionary biologists claim to have demonstrated that design is superfluous for understanding biological complexity. The only way to actually demonstrate this, however, is to exhibit material mechanisms that account for the various forms of biological complexity out there. Now, if for every instance of biological complexity some mechanism could readily be produced that accounts for it, intelligent design would drop out of scientific discussion. Occam’s razor, by proscribing superfluous causes, would in this instance finish off intelligent design quite nicely.

But that hasn’t happened. Why not? The reason is that there are plenty of complex biological systems for which no biologist has a clue how they emerged. I’m not talking about handwaving just-so stories. Biologists have plenty of those. I’m talking about detailed testable accounts of how such systems could have emerged. To see what’s at stake, consider how biologists propose to explain the emergence of the bacterial flagellum, a molecular machine that has become the mascot of the intelligent design movement.

Howard Berg at Harvard calls the bacterial flagellum the most efficient machine in the universe. The flagellum is a nano-engineered outboard rotary motor on the backs of certain bacteria. It spins at tens of thousands of rpm, can change direction in a quarter turn, and propels a bacterium through its watery environment. According to evolutionary biology it had to emerge via some material mechanism. Fine, but how?

The usual story is that the flagellum is composed of parts that previously were targeted for different uses and that natural selection then co-opted to form a flagellum. This seems reasonable until we try to fill in the details. The only well-documented examples that we have of successful co-optation come from human engineering. For instance, an electrical engineer might co-opt components from a microwave oven, a radio, and a computer screen to form a working television. But in that case, we have an intelligent agent who knows all about electrical gadgets and about televisions in particular.

But natural selection doesn’t know a thing about bacterial flagella. So how is natural selection going to take extant protein parts and co-opt them to form a flagellum? The problem is that natural selection can only select for pre-existing function. It can, for instance, select for larger finch beaks when the available nuts are harder to open. Here the finch beak is already in place and natural selection merely enhances its present functionality. Natural selection might even adapt a pre-existing structure to a new function; for example, it might start with finch beaks adapted to opening nuts and end with beaks adapted to eating insects.

But for co-optation to result in a structure like the bacterial flagellum, we are not talking about enhancing the function of an existing structure or reassigning an existing structure to a different function, but reassigning multiple structures previously targeted for different functions to a novel structure exhibiting a novel function. The bacterial flagellum requires around fifty proteins for its assembly and structure. All these proteins are necessary in the sense that lacking any of them, a working flagellum does not result.

The only way for natural selection to form such a structure by co-optation, then, is for natural selection gradually to enfold existing protein parts into evolving structures whose functions co-evolve with the structures. We might, for instance, imagine a five-part mousetrap consisting of a platform, spring, hammer, holding bar, and catch evolving as follows: It starts as a doorstop (thus consisting merely of the platform), then evolves into a tie-clip (by attaching the spring and hammer to the platform), and finally becomes a full mousetrap (by also including the holding bar and catch).

Ken Miller finds such scenarios not only completely plausible but also deeply relevant to biology (in fact, he regularly sports a modified mousetrap cum tie-clip). Intelligent design proponents, by contrast, regard such scenarios as rubbish. Here’s why. First, in such scenarios the hand of human design and intention meddles everywhere. Evolutionary biologists assure us that eventually they will discover just how the evolutionary process can take the right and needed steps without the meddling hand of design.

But all such assurances presuppose that intelligence is dispensable in explaining biological complexity. The only evidence we have of successful co-optation, however, comes from engineering and confirms that intelligence is indispensable in explaining complex structures like the mousetrap and by implication the flagellum. Intelligence is known to have the causal power to produce such structures. We’re still waiting for the promised material mechanisms.

The other reason design theorists are less than impressed with co-optation concerns an inherent limitation of the Darwinian mechanism. The whole point of the Darwinian selection mechanism is that you can get from anywhere in configuration space to anywhere else provided you can take small steps. How small? Small enough that they are reasonably probable. But what guarantee do you have that a sequence of baby-steps connects any two points in configuration space?

Richard Dawkins compares the emergence of biological complexity to climbing a mountain--Mount Improbable, as he calls it. According to him, Mount Improbable always has a gradual serpentine path leading to the top that can be traversed in baby-steps. But that’s hardly an empirical claim. Indeed, the claim is entirely gratuitous. It might be a fact about nature that Mount Improbable is sheer on all sides and getting to the top from the bottom via baby-steps is effectively impossible. A gap like that would reside in nature herself and not in our knowledge of nature (it would not, in other words, constitute a god-of-the-gaps).

The problem is worse yet. For the Darwinian selection mechanism to connect point A to point B in configuration space, it is not enough that there merely exist a sequence of baby-steps connecting the two. In addition, each baby-step needs in some sense to be “successful.” In biological terms, each step requires an increase in fitness as measured in terms of survival and reproduction. Natural selection, after all, is the motive force behind each baby-step, and selection only selects what is advantageous to the organism. Thus, for the Darwinian mechanism to connect two organisms, there must be a sequence of successful baby-steps connecting the two.

Again, it is not enough merely to presuppose this--it must be demonstrated. For instance, it is not enough to point out that some genes for the bacterial flagellum are the same as those for a type III secretory system (a type of pump) and then handwave that one was co-opted from the other. Anybody can arrange complex systems in a series. But such series do nothing to establish whether the end evolved in a Darwinian fashion from the beginning unless the probability of each step in the series can be quantified, the probability at each step turns out to be reasonably large, and each step constitutes an advantage to the organism (in particular, viability of the whole organism must at all times be preserved).
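The quantitative demand here is easy to state: assuming the steps are independent, the probability of the whole series is the product of the per-step probabilities, which shrinks geometrically. A sketch (Python, with made-up per-step probabilities) illustrates why even "reasonably large" individual steps can compound into a tiny overall probability.

```python
import math

# Hypothetical per-step probabilities for a 20-step series (made up
# for illustration; nothing here is a measured biological quantity).
step_probs = [0.1] * 20

# Assuming independent steps, the probability that the entire sequence
# occurs is the product of the individual step probabilities.
total = math.prod(step_probs)
print(total)  # roughly 1e-20
```

This is why merely arranging systems in a series settles nothing: without a probability attached to each transition, the product is unknown.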

Convinced that the Darwinian mechanism must be capable of doing such evolutionary design work, evolutionary biologists rarely ask whether such a sequence of successful baby-steps even exists; much less do they attempt to quantify the probabilities involved. I attempt that in chapter 5 of my most recent book No Free Lunch. There I lay out techniques for assessing the probabilistic hurdles that the Darwinian mechanism faces in trying to account for complex biological structures like the bacterial flagellum. The probabilities I calculate--and I try to be conservative--are horrendous and render natural selection entirely implausible as a mechanism for generating the flagellum and structures like it.

If I’m right and the probabilities really are horrendous, then the bacterial flagellum exhibits specified complexity. Furthermore, if specified complexity is a reliable marker of intelligent agency, then systems like the bacterial flagellum bespeak intelligent design and are not solely the effect of material mechanisms.

It’s here that critics of intelligent design raise the argument-from-ignorance objection. For something to exhibit specified complexity entails that no known material mechanism operating in known ways is able to account for it. But that leaves unknown material mechanisms. It also leaves known material mechanisms operating in unknown ways. Isn’t arguing for design on the basis of specified complexity therefore merely an argument from ignorance?

Two comments on this objection: First, the great promise of Darwinian and other naturalistic accounts of evolution was precisely to show how known material mechanisms operating in known ways could produce all of biological complexity. So at the very least, specified complexity is showing that problems claimed to be solved by naturalistic means have not been solved. Second, the argument-from-ignorance objection could in principle be raised for any design inference that employs specified complexity, including those where humans are implicated in constructing artifacts. An unknown material mechanism might explain the origin of the Mona Lisa in the Louvre, or the Louvre itself, or Stonehenge, or how two students wrote exactly the same essay. But no one is looking for such mechanisms. It would be madness even to try. Intelligent design caused these objects to exist, and we know that because of their specified complexity.

Specified complexity, by being defined relative to known material mechanisms operating in known ways, might always be defeated by showing that some relevant mechanism was omitted. That’s always a possibility (though as with the plagiarism example and with many other cases, we don’t take it seriously). As William James put it, there are live possibilities and then again there are bare possibilities. Many design inferences can be questioned or doubted only by invoking a bare possibility. Such bare possibilities, if realized, would defeat specified complexity. But defeat specified complexity in what way? Not by rendering the concept incoherent but by dissolving it.

In fact, that is how Darwinists, complexity theorists, and others intent on defeating specified complexity as a marker of intelligence usually attempt it, namely, by showing that it dissolves once we have a better understanding of the underlying material mechanisms that render the object in question reasonably probable. By contrast, design theorists argue that specified complexity in biology is real: that any attempt to palliate the complexities or improbabilities by invoking as yet unknown mechanisms or known mechanisms operating in unknown ways is destined to fail. This can in some cases be argued convincingly, as with Michael Behe’s irreducibly complex biochemical machines and with biological structures whose geometry allows complete freedom in possible arrangements of parts.

Consider, for instance, a configuration space comprising all possible character sequences from a fixed alphabet (such spaces model not only written texts but also polymers like DNA, RNA, and proteins). Configuration spaces like this are perfectly homogeneous, with one character string geometrically interchangeable with the next. The geometry therefore precludes any underlying mechanisms from distinguishing or preferring some character strings over others. Not material mechanisms but external semantic information (in the case of written texts) or functional information (in the case of polymers) is needed to generate specified complexity in these instances. To argue that this semantic or functional information reduces to material mechanisms is like arguing that Scrabble pieces have inherent in them preferential ways they like to be sequenced. They don’t. Michael Polanyi offered such arguments for biological design in the 1960s.
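The homogeneity claim amounts to a uniform chance distribution over sequences. A brief sketch (Python; the alphabet size and length are illustrative choices) shows that every particular string of length n over an alphabet of size k has the same probability, k**-n, with nothing in the geometry of the space to prefer one string over another.

```python
def uniform_string_prob(alphabet_size: int, length: int) -> float:
    # Under the uniform (chance) distribution, every particular string
    # of the given length has identical probability: alphabet_size**-length.
    return alphabet_size ** -length

# Illustrative case: a 100-residue protein over the 20 amino acids.
p = uniform_string_prob(20, 100)
print(p)  # astronomically small, and identical for every 100-residue sequence

# Smaller example: any particular DNA triplet over the 4-letter alphabet.
print(uniform_string_prob(4, 3))  # 1/64
```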

In summary, evolutionary biology contends that material mechanisms are capable of accounting for all of biological complexity. Yet for biological systems that exhibit specified complexity, these mechanisms provide no explanation of how they were produced. Moreover, in contexts where the causal history is independently verifiable, specified complexity is reliably correlated with intelligence. At a minimum, biology should therefore allow the possibility of design in cases of biological specified complexity. But that’s not the case.

Evolutionary biology allows only one line of criticism, namely, to show that a complex specified biological structure could not have evolved via any material mechanism. In other words, so long as some unknown material mechanism might have evolved the structure in question, intelligent design is proscribed. This renders evolutionary theory immune to disconfirmation in principle, because the universe of unknown material mechanisms can never be exhausted. Furthermore, the evolutionist has no burden of evidence. Instead, the burden of evidence is shifted entirely to the evolution skeptic. And what is required of the skeptic? The skeptic must prove nothing less than a universal negative. That is not how science is supposed to work.

Science is supposed to pursue the full range of possible explanations. Evolutionary biology, by limiting itself to material mechanisms, has settled in advance which biological explanations are true apart from any consideration of empirical evidence. This is armchair philosophy. Intelligent design may not be correct. But the only way we could discover that is by admitting design as a real possibility, not ruling it out a priori. Darwin himself agreed. In the Origin of Species he wrote: “A fair result can be obtained only by fully stating and balancing the facts and arguments on both sides of each question.”