Intelligent Design Proponents Toil More than the Critics: A Response to Wesley Elsberry and Jeffrey Shallit

Original Article

Version 1.1

Casey, you did not write a response to the substance of our essay. That would have required reading comprehension on your part. What you wrote was an orgy of strawman gouging and delusional codswallop.

Wesley Elsberry, in what are apparently his only comments in response to this extensive rebuttal to his paper


A few years back Dr. Wesley Elsberry and Dr. Jeffrey Shallit co-wrote an article, “Information Theory, Evolutionary Computation, and Dembski’s ‘Complex Specified Information’,” in response to William Dembski’s 2001 book No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence.

No Free Lunch was something of a sequel to Dembski’s first major book, The Design Inference: Eliminating Chance through Small Probabilities (Cambridge University Press, 1998), but Dembski’s work has come a long way since that time. In this regard — and it’s not Elsberry or Shallit’s fault per se, this is just how things go — their critique is now somewhat outdated. The computational research of Dembski and Robert Marks at the Evolutionary Informatics Lab (as well as the work of the Biologic Institute) has preempted many of the lines of objection they raised. For example, Elsberry and Shallit charged that “intelligent design advocates have produced many popular books, but essentially no scientific research.” It’s doubtful that charge was accurate when they first posted their article, but no serious critic could make that charge in 2010.

Dembski has written brief replies to Shallit (see here and here) in which he indicated that many of their criticisms are now outdated and that he did not see the need to write a further, detailed response. I too found many of Elsberry and Shallit’s critiques to be misguided, reflecting misapplications of Dembski’s ideas. Nonetheless, I occasionally get emails from people interested in a rebuttal to their article, so I felt some written response was necessary. This article intends to be that non-exhaustive response, touching upon some of the errors in Elsberry and Shallit’s critique of Dembski.

I. Understanding How Intelligent Agents Operate Yields a Positive Case for Design

Elsberry and Shallit allege that in Dembski’s work, “Intelligence, and intelligent agents, are treated as unfathomable mysteries beyond human comprehension.” They further charge, “Since the decisions of intelligent agents are supposedly not reducible to chance and natural law, it follows that these decisions are irrational, in the sense of being inexplicable through rational processes.” These charges are easily rebutted. Dembski’s work, and the work of many other ID proponents, shows that we can understand how intelligent agents operate and use that knowledge to make reliable predictions about the type of information we will find if an intelligent agent was at work. This allows us to reliably detect such an agent’s prior behavior. In fact, one of the quotes from Dembski that Elsberry and Shallit cite early in their critique makes this point clear:

(1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi)

This passage from Dembski shows that we can understand the process of intelligent design and how intelligent agents produce designs. Yet Elsberry and Shallit write in response to Dembski here: “But this is not a positive account of what constitutes design.” I strongly disagree with Elsberry and Shallit’s claim.

Dembski’s statement above asserts that the actions of intelligent agents are comprehensible, and alludes to the fact that we know they act with forethought, will, and intentionality to solve some complex problem. It’s precisely this capacity that gives intelligent agents their unique ability to generate high levels of specified complexity. Stephen Meyer explains how our ability to study and understand the actions of intelligent agents allows us to construct a positive case for design:

As Berlinski (2000) has argued, genetic algorithms need something akin to a “forward looking memory” in order to succeed. Yet such foresighted selection has no analogue in nature. In biology, where differential survival depends upon maintaining function, selection cannot occur before new functional sequences arise. Natural selection lacks foresight. What natural selection lacks, intelligent selection — purposive or goal-directed design — provides. Rational agents can arrange both matter and symbols with distant goals in mind. In using language, the human mind routinely “finds” or generates highly improbable linguistic sequences to convey an intended or preconceived idea. … Analysis of the problem of the origin of biological information, therefore, exposes a deficiency in the causal powers of natural selection that corresponds precisely to powers that agents are uniquely known to possess. Intelligent agents have foresight. Such agents can select functional goals before they exist. They can devise or select material means to accomplish those ends from among an array of possibilities and then actualize those goals in accord with a preconceived design plan or set of functional requirements. Rational agents can constrain combinatorial space with distant outcomes in mind. The causal powers that natural selection lacks — almost by definition — are associated with the attributes of consciousness and rationality — with purposive intelligence. Thus, by invoking design to explain the origin of new biological information, contemporary design theorists are not positing an arbitrary explanatory element unmotivated by a consideration of the evidence. Instead, they are positing an entity possessing precisely the attributes and causal powers that the phenomenon in question requires as a condition of its production and explanation.

Stephen C. Meyer, “The origin of biological information and the higher taxonomic categories,” Proceedings of the Biological Society of Washington, Vol. 117(2):213-239 (2004).

Likewise, in Dembski’s latest book (with Jonathan Witt), Intelligent Design Uncensored he explains how our understanding of the way intelligent agents act allows us to formulate a positive argument for design:

We know from experience that intelligent agents build intricate machines that need all their parts to function, things like mousetraps and motors. And we know how they do it — by looking to a future goal and then purposefully assembling a set of parts until they’re a working whole. Intelligent agents, in fact, are the one and only type of thing we have ever seen doing this sort of thing from scratch. In other words, our common experience provides positive evidence of only one kind of cause able to assemble such machines. It’s not electricity. It’s not magnetism. It’s not natural selection working on random variation. It’s not any purely mindless process. It’s intelligence — the one and only.

William Dembski and Jonathan Witt, Intelligent Design Uncensored: An Easy-to-Understand Guide to the Controversy, pp. 20-21 (InterVarsity Press, 2010).

In the same book they show the comprehensibility of intelligent action by boiling the process of intelligent design research down into three steps, which provide a positive determination that intelligence is the best explanatory cause for the observed data:

When we attribute intelligent design to complex biological machines that need all of their parts to work, we’re doing what historical scientists do generally. Think of it as a three-step process: (1) locate a type of cause active in the present that routinely produces the thing in question; (2) make a thorough search to determine if it is the only known cause of this type of thing; and (3) if it is, offer it as the best explanation for the thing in question.

William Dembski and Jonathan Witt, Intelligent Design Uncensored: An Easy-to-Understand Guide to the Controversy, p. 53 (InterVarsity Press, 2010).

Thus, Meyer and Dembski agree that this sort of positive account of design shows exactly why ID is a compelling explanation for features that require a goal-directed cause.

Dembski uses the same logic when he writes in The Design Inference that “[t]he principal characteristic of intelligent agency is directed contingency, or what we call choice.” (The Design Inference, p. 62.) Likewise, in The Design of Life, the very definition Dembski (and Jonathan Wells) give for intelligence helps us to understand how intelligent agents operate:

A type of cause, process, or principle that is able to find, select, adapt, and implement the means necessary to effectively bring about ends (or achieve goals or realize purposes). Because intelligence is about matching means to ends, it is inherently teleological. (Dembski & Wells, The Design of Life, glossary.)

Dembski and Wells thus write that ID “holds that intelligence is fully capable of … interacting with and influencing the material world, and thereby guiding it into certain physical states to the exclusion of others … intelligence can itself be a source of biological novelties.” (The Design of Life, p. 109.) They make further observations about how intelligent agents operate:

We know from experience that when people design things (such as a car engine), they begin with a basic concept and adapt it to different ends. As much as possible, designers piggyback on existing patterns and concepts instead of starting from scratch. Our experience of how human intelligence works therefore provides insight into how a designing intelligence responsible for life might have worked. (The Design of Life, p. 140).

In Dembski’s book Understanding Intelligent Design (co-written with Sean McDowell), he writes that “Functional information is regularly observed to result from an intelligent mind” (p. 128), and also: “When intelligent agents act, they leave behind a characteristic trademark or signature known as specified complexity. By recognizing this feature, we can distinguish intelligently designed objects from those that are the result of unintelligent natural forces.” (p. 102)

It seems clear that not only has Dembski treated intelligent agents as comprehensible causes which can be studied and understood, but he has used the analysis of such agents to formulate positive arguments for design. Thus, because we observe that intelligent agents have the unique ability to employ will, forethought, and intentionality in order to achieve some pre-determined end-goal, when we find structures in nature that require such a goal-directed or teleological process, we can infer the prior action of an intelligence.

Intelligent agents seem to work in ways that we can understand, which allows us to make predictions about the types of unlikely patterns they will produce — informational patterns which require a goal-directed, purposive process to originate. It’s difficult to accept Elsberry and Shallit’s claim that the abilities of an intelligent agent are “unfathomable mysteries” when ID theorists are constantly elucidating the capabilities of intelligent agents and using them to make predictions about what we should find if an object was designed.

What’s most striking is that Elsberry and Shallit challenge Dembski to publish a rigorous definition of CSI (complex specified information). But Dembski’s book The Design Inference explores the nature of specified complexity, and in No Free Lunch Dembski gives this definition of specified complexity: “The coincidence of conceptual and physical information where the conceptual information is both identifiable independently of the physical information and also complex.” (p. 141)

II. Elsberry and Shallit Find False Positives by Misapplying Specified Complexity

One of the most severe problems with Elsberry and Shallit’s response to Dembski’s book No Free Lunch is their misapplication of specified complexity and their repeated, incorrect claims that Dembski’s methods would yield “false positives.” Elsberry and Shallit state that “other natural processes produce ‘design’ in the sense of pattern,” and thus they claim that Dembski’s methods of design detection yield “false positives.” Like a home pregnancy test that incorrectly reads “pregnant,” a false positive occurs when an experimental or theoretical test says the answer is “yes” when it should have been “no.”

Elsberry and Shallit raise various examples where they feel Dembski’s methods would detect design in natural objects that were not, in fact, intelligently designed. The problem with many of their examples is that Dembski doesn’t infer design from just any pattern, but only from patterns that are also complex — specified complexity. It seems that only by misapplying Dembski’s ideas about specified complexity can they claim that his methods don’t work.

For example, Elsberry and Shallit cite pulsars as a pattern that could give a false positive for design under Dembski’s explanatory filter. They write:

Pulsars (rapidly pulsating extraterrestrial radio sources) were discovered by Jocelyn Bell in 1967. She observed a long series of pulses of period 1.337 seconds. In at least one case the signal was tracked for 30 consecutive minutes, which would represent approximately 1340 pulses. Like the SETI sequence, this sequence was viewed as improbable (hence “complex”) and specified (see Section 8), hence presumably it would constitute complex specified information and trigger a design inference. Yet spinning neutron stars, and not design, are the current explanation for pulsars.

This reasoning misapplies specified complexity. Pulsars do produce regular repeating patterns, but those patterns aren’t complex. Elsberry and Shallit are simply wrong to claim that the patterns we observe from pulsars are unlikely. Consider this description of pulsar patterns from NASA:

Pulsars pulse because the rotation of the neutron star causes the radiation generated within the magnetic field to sweep in and out of our line of sight with a regular period.

Thus, the “regular” patterns from pulsars are easily explained by natural law. The same holds for the “regular patterns formed by ice crystals,” which Elsberry and Shallit claim “would constitute CSI.” In Understanding Intelligent Design, Dembski writes (with Sean McDowell) why ice crystals are easily explained by natural law:

we cannot infer something was designed merely by eliminating chance. Star-shaped ice crystals, which form on cold windows, are a case in point. They form as a matter of physical necessity simply by virtue of the properties of water. An ice crystal has an ordered structure, but it does not warrant a design inference — at least not in the same way as a Mickey Mouse landscape or Mount Rushmore. A designer may have designed the properties of water to bring about ice crystals, but such a design would be embedded in the laws of nature. The design we’re interested in is more like engineering design, which looks to particular structures rather than general processes.

(William Dembski and Sean McDowell, Understanding Intelligent Design: Everything You Need to Know in Plain Language, pp. 105-106 (Harvest House, 2008).)

Likewise, the repeating pattern of atomic packing in a salt crystal is a pattern, but not a complex one. The laws of atomic packing and chemical bonding easily determine the structure of a salt crystal. In none of these cases would we infer design.

Even non-ID theorist Leslie Orgel has recognized that living organisms have a type of pattern-based complexity that is quite different from the repeating patterns of salt crystals (or one might add, from pulsars or ice crystals):

“[L]iving organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple, well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures which are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity.”

Leslie E. Orgel, The Origins of Life: Molecules and Natural Selection, p. 189 (Chapman & Hall, 1973).

Pulsars and crystals are specified, but their simple, law-based patterns do not exhibit the kind of complexity that triggers the design inference.

Elsberry and Shallit also claim that basalt columns could trigger a false positive for design under Dembski’s methods, because they might appear to be intelligently designed stone columns, well known from ancient ruins. They write:

Let us consider the construction of tall pillars made of hard material, such as stone columns. We wish to argue that all such pillars are due to intelligent agents. Now in every case where a pillar appears and the underlying causal story is known with the certainty Dembski demands, these pillars were constructed by intelligent agents (humans). Using Dembskian induction, we would conclude that intelligent agents must be responsible for all such pillars, including the sand pipes at Kodachrome Basin State Park in Utah and the basalt columns at the Giant’s Causeway in Ireland. But this conclusion can only be retained by ruling out the circumstantial evidence in favor of accepted geological explanations for these features (ancient geysers and split volcanic flows, respectively; see [43]).

But an ID theorist who properly applies ID thinking would never obtain a false positive from basalt columns. For one, it’s not true that “in every case where a [basalt column] appears and the underlying causal story is known with the certainty Dembski demands, these [columns] were constructed by intelligent agents (humans).” We can observe basalt lava flows cooling in the present day and then observe the columnar shapes caused by cooling basaltic lava. So we have a known, viable, natural explanation for these structures. Such columnar basalts are common in Eastern Washington State and I’ve observed them many times; they do NOT resemble human-designed pillars. Here are just a few reasons why basalt columns aren’t like “stone pillars” known from ancient human ruins:

  • First, basalt columns are generally hexagonal or pentagonal prisms, while human-designed pillars from ruins are generally round.
  • Second, basalt columns are never found standing alone but instead are found embedded in an outcrop representing a cooled lava flow. In contrast, human-designed pillars from ruins can be found standing alone or resting on any type of rock, and need not be embedded in a lava flow.
  • Third, basalt columns are made of basalt and found near basalt lava — the tell-tale sign. Human-designed pillars can be found anywhere and are commonly made of marble or other carvable rock — not basalt.

It seems unlikely that a careful investigator applying design theory would confuse basalt columns with a designed structure. The same goes for Elsberry and Shallit’s examples of rainbows, fungus “fairy rings,” and circular and polygonal cracks in rocks due to freezing. None of these entail false positives for Dembski’s methods. They are in every case easily explained via observed natural causes.

To be more precise, part of the reason Elsberry and Shallit keep uncovering alleged “false positives” is that they infer design prematurely, without carefully asking whether natural causes exist for the structure in question. They are not willing to do the hard investigation needed to determine whether ID is in fact the best explanation, or whether some natural explanation is superior. This flaw in their methodology will become increasingly clear as further examples are discussed here.

Dembski, on the other hand, doesn’t infer intelligent design lightly. His point, consistently, is that you should do hard work to determine the best explanation, remaining open to other explanatory causes. Elsberry and Shallit seem unwilling to do that work and think about whether natural causes can explain this data.

Intelligent Design is Falsifiable, But Elsberry and Shallit Misunderstand Dembski’s Response Regarding the Oklo Natural Reactor

Another example where Elsberry and Shallit misapply Dembski’s work pertains to the “Oklo Natural Reactor.” The Oklo Natural Reactor is a uranium ore deposit in sedimentary rock in a mine in Gabon, Africa. It shows concentrations of various isotopes that indicate self-sustained nuclear reactions took place over many thousands of years. A “nuclear reactor” on earth is usually thought of as intelligently designed, not a natural object. But this one appears to be natural. In their article, Elsberry and Shallit suggest that this example might trigger a false positive for detecting intelligent design under Dembski’s methods. They claim that Dembski’s response regarding the Oklo Natural Reactor shows that design is unfalsifiable — but they quote Dembski selectively. Dembski explains the proper analysis of the reactor in a passage they don’t quote:

although the conditions that these natural nuclear reactors had to satisfy were highly specific, they were not that improbable. For instance, with respect to the proportions of uranium 235 and 238, given their differing decay rates, there was bound to come a time when the proportions would be ideal for a nuclear chain reaction. (No Free Lunch, p. 27)

Thus, when we consider all of the available probabilistic resources, the Oklo Natural Reactor triggers the “necessity node” in Dembski’s explanatory filter, meaning that it can be explained as the result of lawlike natural processes. In other words, Dembski argues that given the available probabilistic resources, this feature is not highly unlikely and is best explained as natural.
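
As a rough illustration only, the filter’s decision logic (necessity first, then chance, then design) can be sketched in a few lines of Python. The boolean and probability inputs here are hypothetical stand-ins for judgments that, as this article stresses, require hard empirical investigation; the probability bound is Dembski’s universal probability bound of roughly 1 in 10^150 from The Design Inference.

```python
# Illustrative sketch (not Dembski's own code) of the explanatory filter's
# decision order: necessity, then chance, then design.
# The inputs are hypothetical stand-ins for empirical judgments.

UNIVERSAL_PROBABILITY_BOUND = 1e-150  # Dembski's universal probability bound

def explanatory_filter(explained_by_law: bool, probability: float,
                       specified: bool) -> str:
    """Classify an event as best explained by necessity, chance, or design."""
    if explained_by_law:
        # The "necessity node": lawlike processes account for the event
        # (e.g., pulsars, ice crystals, the Oklo reactor).
        return "necessity"
    if probability > UNIVERSAL_PROBABILITY_BOUND or not specified:
        # Within available probabilistic resources, or lacking an
        # independent specification: chance remains a live explanation.
        return "chance"
    # Specified AND beyond all available probabilistic resources.
    return "design"

# The Oklo reactor, on Dembski's analysis, is handled at the necessity node:
print(explanatory_filter(explained_by_law=True, probability=1.0,
                         specified=True))  # prints "necessity"
```

The point of the sketch is simply that the filter terminates at the first node that succeeds, so a feature explained by law never reaches the design node at all.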

In fact, it turns out that geologists have perfectly good explanations for how the Oklo Natural Reactor got started naturally. As explained in “The Natural Nuclear Reactor at Oklo: A Comparison With Modern Nuclear Reactors,” by Andrew Karam (then-professor at Rochester Institute of Technology):

At the time that the Gabon reactor went critical, the abundance of 235U was 3%, similar to that in current commercial nuclear reactors. The approximate shape of the reactor zones is that of a compact mass of uranium oxide surrounded by porous rocks, which were presumably hydraulically connected to surface or ground water, allowing moderation and reflection of the neutrons produced by spontaneous fission or cosmic ray induced fission.

The relatively large size and spherical shape of the uranium bearing region reduced buckling. When the surrounding porous rocks were saturated with water, the subsequent moderation and reflection allowed the reactor to achieve criticality. It is likely that criticality was not continuous. As the reactor power increased, the water moderator would heat, reducing its density and its effectiveness as a moderator and reflector. This process, known as a negative temperature coefficient, helps to control power during transient conditions in manmade nuclear reactors.

If sufficient power was produced the reactor would have lost moderation and reflection, resulting in a shutdown. Until short lived fission product poisons decayed away, even immediate resaturation with water may not have resulted in restarting the nuclear chain reactions. Therefore, the reactor probably did not operate continuously, but at discrete intervals with the operating time determined by the power output, water supply pressure and temperature, and water flow through the reactor. The duration of the shutdown periods would have been determined by the buildup of fission product poisons and the length of time required to replace the moderator (if it boiled away) or to cool it sufficiently to resume the reaction.

In fact, a recent paper (Meshik et al, 2004) looked at the operation of the Oklo reactor. The authors deduced that the reactor likely operated cyclically, operating for a half hour until accumulated heat boiled away the water, then shutting down for up to 2.5 hours until the rocks cooled sufficiently to allow water saturation again.

(Andrew Karam, “The Natural Nuclear Reactor at Oklo: A Comparison With Modern Nuclear Reactors“)

Significantly, one chemist predicted the possibility of natural nuclear reactors on earth before one was discovered (see P. K. Kuroda, 1956). In fact, Karam goes on to predict the discovery of more natural reactors, noting that it “seems likely that other natural reactors were operational in the past”:

It also seems likely that other natural reactors were operational in the past. Other parts of the world have large, high assay deposits of uranium mineralization in sedimentary strata, so the circumstances which led to the formation of the Gabon reactor may not have been unique. It seems safe to assume that this process may have taken place throughout the history of the earth. Indeed, there are hints that a natural reactor was operational in the Colorado Plateau, based on a slight depletion of 235U in ore specimens there (Cowan, 1976). It may be that our knowledge of natural nuclear reactors is limited primarily by our explorations to date.

It seems that Dembski rightly rejects design for the Oklo Natural Reactor.

An Improper Charge of Unfalsifiability

When assessing Dembski’s analysis of the Oklo Natural Reactor, Elsberry and Shallit charge that Dembski’s approach to detecting design is “unfalsifiable.” They quote Dembski saying: “suppose the Oklo reactors ended up satisfying this criterion after all. Would this vitiate the complexity-specification criterion? Not at all. At worst it would indicate that certain naturally occurring events or objects that we initially expected to involve no design actually do involve design.” They then reply: “In other words, Dembski’s claims are unfalsifiable. We find this good evidence that Dembski’s case for intelligent design is not a scientific one.”

This is an inaccurate criticism which confuses “unfalsified” with “unfalsifiable.” After all, as we have seen, Dembski concludes that under our current knowledge the Oklo Natural Reactor was not designed. This shows that, in principle, design is falsifiable. Of course, all scientific theories must be held subject to future data. So the fact that, hypothetically, the data regarding the Oklo Natural Reactor could be different and trigger a design inference does not imply that Dembski’s methods are unfalsifiable. Unfalsified does not equal unfalsifiable.

A proper response to Dembski’s argument is to recognize that ID is falsifiable but then to note that design theorists should work hard and study nature carefully before inferring design. While one might initially think the Oklo Natural Reactor is both specified and complex (unlikely), a closer investigation shows it isn’t all that improbable and is in fact easily accounted for by natural causes. Dembski did that hard investigation and determined it was not designed.

With hard work and careful investigation, it’s eminently possible to distinguish designed features, like human-built stone pillars, from natural ones, like basalt columns. In his book Understanding Intelligent Design, Dembski explains why hard work is important when studying intelligent design:

The prospect that further knowledge may overturn a design inference is a risk the Explanatory Filter gladly accepts. In fact, it is a risk common to all scientific investigation, not just intelligent design. Scientific knowledge is fallible — it may be wrong, and it may be shown to be wrong in light of further empirical evidence and theoretical insight. If the mere possibility of being wrong were enough to destroy the Filter, we would have to throw out all of science.

(William Dembski and Sean McDowell, Understanding Intelligent Design: Everything You Need to Know in Plain Language, p. 113 (Harvest House, 2008).)

The fact that ID could in principle be disproven doesn’t mean we should throw it out entirely. On the contrary, ID’s falsifiability is a strength.

III: Whose Views About Intelligent Design Are Really Treated as Falsifiable?

In the previous section, we saw that Elsberry and Shallit prematurely allowed their own preconceptions to dictate what should count as designed. They claimed that ID is “unfalsifiable,” while treating the overall naturalistic paradigm of origins as falsifiable, stating, “Contrary to Dembski’s assertions, design is not arbitrarily ruled out as an element of scientific investigation.”

This is an odd claim, because there are many examples of ID critics trying to dismiss ID by defining it as outside of science. These critics arbitrarily refuse to even consider ID. In fact, Dr. Elsberry’s former employer, the National Center for Science Education (NCSE), convinced Judge Jones to do just that in the Dover ruling. It’s difficult for me to accept Elsberry and Shallit’s claim given that evolutionists have said things like the following:

“[I]f a living cell were to be made in the laboratory, it would not prove that nature followed the same pathway billions of years ago. But it is the job of science to provide plausible natural explanations for natural phenomena.” (Science and Creationism, A View from the National Academy of Sciences, 2nd Edition (National Academy Press, 1999).)

“The statements of science must invoke only natural things and processes. … The theory of evolution is one of these explanations.” (Teaching About Evolution and the Nature of Science, p. 42 (National Academy Press, 1998).)

“Even if all the data point to an intelligent designer, such an hypothesis is excluded from science because it is not naturalistic.” (Scott C. Todd, “A view from Kansas on that evolution debate,” Nature, Vol. 401:423 (Sept. 30, 1999).)

It’s hard to take Elsberry and Shallit seriously when they deny that many evolutionary scientists reject ID as unscientific by definition. And it is an arbitrary exclusion, since ID makes testable and falsifiable predictions and uses the scientific method to support its claims. As discussed below, the way intelligent agents conceive of their designs allows us to make positive predictions about what we should find if an intelligent agent was involved.

IV. Kaput Criticisms of William Dembski’s Nicholas Caputo Design Detection Example

In some of his writings, William Dembski has used the case of Nicholas Caputo’s suspicious ballot orderings as an example of design detection. Caputo, a Democrat, was a county clerk in New Jersey responsible for determining the order of candidates on the election ballot. In 40 out of 41 elections, a Democrat was placed first on the ballot, ahead of Republicans. Dembski uses this example to show the methods by which we detect design, first ruling out chance, then law or law-plus-chance explanations, and finally arguing that design is the best explanation for Caputo’s suspiciously ordered ballots.
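
The chance calculation behind this example can be checked directly. Treating each ballot ordering as a fair coin flip (the null hypothesis being tested, not a claim about how Caputo actually acted), the probability of 40 or more Democrat-first outcomes in 41 elections works out to roughly 1 in 50 billion, the figure commonly cited in discussions of the Caputo case:

```python
# Probability of at least 40 Democrat-first ballot orderings out of 41,
# assuming each ordering were decided by a fair coin flip.
from math import comb

trials = 41
threshold = 40  # 40 or more Democrat-first outcomes

# Sum the binomial tail: C(41,40) + C(41,41), divided by 2^41 total outcomes.
p = sum(comb(trials, k) for k in range(threshold, trials + 1)) / 2**trials

print(f"{p:.3g}")  # prints 1.91e-11, i.e. about 1 in 50 billion
```

It is this extreme improbability, combined with the independent specification (Democrats first), that motivates ruling out chance in this example.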

Elsberry and Shallit’s critique takes aim at Dembski’s handling of the Caputo example. They claim that Dembski is too quick to infer design, offering a laundry list of alternative explanations for Caputo’s actions apart from design. Since Dembski inferred design, the implication is that Dembski’s methods can lead to false positives in design detection. I’ll list Elsberry and Shallit’s alternative explanations and show why they don’t refute Dembski’s methods — but rather confirm them:

Elsberry and Shallit: “(a) Caputo really had no choice in the assignment, since a mobster held a gun to his head on all but one occasion. (On that one occasion the mobster was out of town.)”

Rather than showing that Dembski’s work leads to a false positive, explanation (a) IS in fact design! One might view it as front-loaded design, where Caputo wasn’t the agent who planned the scheme but merely the one who carried it out. Nonetheless, it’s still design, since the intelligent designer, the mobster, is the one orchestrating the event.

Elsberry and Shallit: “(b) Caputo, although he appears capable of making choices, is actually the victim of a severe brain disease that renders him incapable of writing the word “Republican.” On one occasion his disease was in remission.”

Explanation (b) is a testable natural explanation: we could investigate it and determine whether it explains the data. Given that Caputo was taken to court over the issue, presumably the possibility of insanity was considered. Far from negating Dembski’s explanatory filter, this supports it: in the ordinary ways we detect design in the judicial system when finding people guilty of crimes, we always consider the possibility of insanity. Again, this is a testable possibility that simply shows we need to work hard and make sure we’ve explored reasonable natural explanations before inferring design (i.e., criminal guilt). And that’s exactly what our judicial system does.

Elsberry and Shallit: “(c) Caputo was molested by a Republican at an early age, and the resulting trauma has caused a pathological hatred of Republicans. He therefore tends to favor Democrats, but on one occasion a Republican bought him a beer immediately prior to the ballot assignment.”

Explanation (c), though in questionable taste, is also a testable explanation. However, it’s not clear that such a hypothetical history of abuse would be sufficient to explain Caputo’s behavior. At most it shows that he favors Democrats; but we already knew he favored Democrats (he was a Democrat)! So even such a tragic history of abuse would not be sufficient to exonerate him for his unethical behavior. Caputo is still an agent acting freely: he could choose whether or not to allow his private leanings to influence ballot ordering. The abuse might help explain his private internal leanings, but it does not excuse his free choice to fraudulently favor Democrats. “I was abused as a child,” though an emotionally compelling argument, is rarely a valid defense to a crime. This explanation does not negate design.

Elsberry and Shallit: “(d) Caputo attempted to make his choices randomly, using the flip of a fair coin, but unknown to him, on all but one occasion he accidently used a two-headed trick coin from his son’s magic chest. Furthermore, he was too dull-witted to remember assignments from previous ballots.”

Option (d) is outlandish to the point that we need not consider it: no election supervisor would be careless enough to use a two-headed trick coin year after year without noticing. So while it is a possible law-based explanation, it is so unlikely as to be untenable.

Elsberry and Shallit: “(e) Caputo himself is the product of a 3.8-billion-year-old evolutionary history involving both natural law and chance. The structure of Caputo’s neural network has been shaped by both this history and his environment since conception. Evolution has shaped humans to act in a way to increase their relative reproductive success, and one evolved strategy to increase this success is seeking and maintaining social status. Caputo’s status depended on his respect from other Democrats, and his neural network, with its limited look-ahead capabilities, evaluated a fitness function that resulted in the strategy of placing Democrats first in order to maximize this status.”

Option (e) is, again, intelligent design. Regardless of whether Caputo evolved or was designed, he is still an intelligent agent capable of making moral choices: whether to “maximize his status” or to obey the law, do his job, and serve democracy. So regardless of Caputo’s origin, if he is a human being then his choices constitute intelligent design. If they don’t, then every person ever convicted of a crime was merely acting as a slave to his “neural network,” and there is no free will, no basis for criminal justice, and no intelligent design of any kind. Do Elsberry and Shallit deny the existence of free will?

Each of their hypothetical additions to the Caputo example fits within Dembski’s methods of detecting design. None of them shows that Dembski’s methods lead to false positives. None of them violates Dembski’s methods. And provided that we are willing to (1) follow Dembski’s advice and search for reasonable natural explanations before inferring design, and (2) not deny the existence of free will, Dembski’s methods will come out with the right answer.

Finally, Elsberry and Shallit assert that “What is really remarkable about this list is both the breadth of Dembski’s claims and the complete and utter lack of quantitative justification for those claims.” Yet in this very example of Mr. Caputo, Dembski offered extensive calculations (see pp. 162-163 of The Design Inference). Likewise, in No Free Lunch (Chapter 5), Dembski calculates the probability of a flagellum evolving, given its irreducibly complex nature.
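For readers curious about the kind of calculation involved, here is a rough sketch of the chance that a fair-coin process would place a Democrat first in at least 40 of 41 drawings. This is my own illustration of the underlying binomial arithmetic, not a reproduction of Dembski’s exact computation:

```python
from math import comb

# Probability that a fair coin yields at least 40 "Democrat first"
# outcomes in 41 independent ballot drawings.
n, k = 41, 40
p = sum(comb(n, i) for i in range(k, n + 1)) / 2**n  # (C(41,40) + C(41,41)) / 2^41
print(p)  # roughly 1.9e-11
```

The result is on the order of 1 chance in 50 billion, in the neighborhood of the figure commonly cited in discussions of the Caputo case.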

V. Elsberry and Shallit Forget that We Must Study the Data Carefully When Inferring Design

A common theme in Elsberry and Shallit’s response to William Dembski is that they infer design before thinking carefully about whether the structure or event in question can in fact be explained by natural causes. They made this mistake with the Caputo example and others, and they make it again when they write that one of the “weakest points of Dembski’s arguments” is that “if, as he suggests, design is always inferred simply by ruling out known hypotheses of chance and necessity, then any observed event with a sufficiently complicated or obscure causal history could be mistakenly assigned to design, either because we cannot reliably estimate the probabilities of each step of that causal history, or because the actual steps themselves are currently unknown.”

The charge is congruent with Elsberry and Shallit’s prior mistakes: They consistently seem unwilling to think hard to sort out possible causal histories and determine what the right answer should be (i.e., was an object or event designed or not?). Dembski would reply by saying that this is why we should study scenarios very carefully and not simply infer design until we have seriously explored the plausibility of natural explanations. This is partly why Dembski says we should not infer design unless the odds of the event occurring are below the “universal probability bound” (events unlikely to occur in the history of the universe given all known probabilistic resources). In Intelligent Design Uncensored, Dembski explains the universal probability bound:

Scientists have learned that within the known physical universe there are about 10^80 elementary particles … Scientists also have learned that a change from one state of matter to another can’t happen faster than what physicists call the Planck time. … The Planck time is 1 second divided by 10^45 (1 followed by forty-five zeroes). … Finally, scientists estimate that the universe is about fourteen billion years old, meaning the universe itself is millions of times younger than 10^25 seconds. If we now assume that any physical event in the universe requires the transition of at least one elementary particle (most events require far more, of course), then these limits on the universe suggest that the total number of events throughout cosmic history could not have exceeded 10^80 × 10^45 × 10^25 = 10^150.

This means that any specified event whose probability is less than 1 chance in 10^150 will remain improbable even if we let every corner and every moment of the universe roll the proverbial dice. The universe isn’t big enough, fast enough or old enough to roll the dice enough times to have a realistic chance of randomly generating specified events that are this improbable.

(William Dembski and Jonathan Witt, Intelligent Design Uncensored: An Easy-to-Understand Guide to the Controversy, pp. 68-69 (InterVarsity Press, 2010).)
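The arithmetic in the quoted passage can be checked directly. The sketch below is my own illustration of the calculation; the three figures are the ones Dembski cites:

```python
# Dembski's universal probability bound, as described in the quote above.
particles   = 10**80   # elementary particles in the known universe
transitions = 10**45   # maximum state transitions per second (1 / Planck time)
seconds     = 10**25   # generous upper bound on the universe's age in seconds

max_events = particles * transitions * seconds
assert max_events == 10**150  # total possible events in cosmic history
```

Any specified event with probability below 1 in this product is, on Dembski’s argument, beyond the reach of the universe’s probabilistic resources.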

Dembski argues that we need to think hard and study the data carefully before concluding design: “To distinguish appearance from reality, the successful investigator must remain open to various possibilities and follow the evidence.” (Dembski & Witt, p. 47.) Calling for hard work and careful thinking before claiming there is evidence backing your theory is the mark of a healthy science.

Returning to Elsberry and Shallit’s charge: they imply that we cannot infer design when an event has an “obscure causal history.” But what would happen if we applied this standard to neo-Darwinian evolution or theories of chemical evolution?

For example, if neo-Darwinism were ruled out whenever stages of causal history were effectively lost, much of evolutionary biology would be lost as well. That would be a bad way to do historical science. Elsberry and Shallit write: “in the absence of a time machine, the causal story we develop can be justified only through circumstantial evidence. This is often the case in historical sciences.” That’s correct, and it’s exactly how ID works: making inferences from the historical record. In fact, their admission here undercuts many of their prior objections to ID.

Thus, their later claim that Dembski’s method “will consistently assign design to events whose exact causal history is obscure” is incorrect, because in many of those examples Dembski does the hard work and finds that the evidence does not point to design. When he does infer design, it’s because he has studied the causal history and present-day causes sufficiently to determine that design is the best explanation. In contrast, Elsberry and Shallit seem uninterested in investigating these causal histories sufficiently. As this response has shown, in their attempt to argue that ID is flawed, they consistently infer design prematurely, before examining the viability of natural causes. Additionally, Elsberry and Shallit would effectively place neo-Darwinism in an unfalsifiable position by claiming that if the origin of a biological structure is “obscure” then we must stick to natural explanations. How about just saying “we don’t know”?

VI. Can Neo-Darwinian Processes Account for Complexity in Nature?

In their critique of Dembski, Elsberry and Shallit write, “there is abundant circumstantial evidence that Darwinian processes can account for complexity in nature, but Dembski excludes this evidence because it does not pass his video-camera certainty test.” This badly misrepresents Dembski’s argument. Looking at all the theoretical work Dembski is doing to test the ability of Darwinian processes to generate specified complexity (see his papers at www.evoinfo.org), it should be clear that Dembski is NOT demanding “video-camera certainty.” Rather, he is willing to test, both empirically and theoretically, the ability of present-day causes to generate high CSI, and then apply his findings to make inferences from the historical record. That’s exactly how historical scientists ought to study these things.

But is their claim of “abundant circumstantial evidence that Darwinian processes can account for complexity in nature” correct?

Earlier this year I posted an article titled, “The NCSE, Judge Jones, and Bluffs About the Origin of New Functional Genetic Information,” which tried to answer this question. My article looked at standard papers cited by critics of ID when trying to establish that neo-Darwinian mechanisms can produce new functional genetic information. After a lengthy analysis of claims made by those papers, I concluded the following:

The NCSE’s (and Judge Jones’s) citation bluffs have not explained how neo-Darwinian mechanisms produce new functional biological information. Instead, the mechanisms invoked in these papers are vague and hypothetical at best:

  • exons may have been “recruited” or “donated” from other genes (and in some cases from an “unknown sou[r]ce”);
  • there were vague appeals to “extensive refashioning of the genome”;
  • mutations were said to cause “fortuitous juxtaposition of suitable sequences” in a gene-promoting region that therefore “evolve”;
  • researchers assumed “radical change in the structure” due to “rapid, adaptive evolution” and claimed that “positive selection has played an important role in the evolution” of the gene, even though function of the gene was not even known;
  • genes were purportedly “cobbled together from DNA of no related function (or no function at all)”;
  • the “creation” of new exons “from a unique noncoding genomic sequence that fortuitously evolved” was assumed, not demonstrated;
  • we were given alternatives that promoter regions arose from a “random genomic sequence that happens to be similar to a promoter sequence,” or that the gene arose because it was inserted by pure chance right next to a functional promoter;
  • explanations went little further than invoking “the chimeric fusion of two genes” based solely on sequence similarity;
  • when no source material is recognizable, we’re told that “genes emerge and evolve very rapidly, generating copies that bear little similarity to their ancestral precursors” because they are simply “hypermutable”;
  • we even saw “a striking case of convergent evolution” of “near-identical” proteins.

To reiterate, in no cases were the odds of these unlikely events taking place actually calculated. Incredibly, natural selection was repeatedly invoked in instances where the investigators did not know the function of the gene being studied and thus could not possibly have identified any known functional advantages gained through the mutations being invoked. In the case where multiple mutational steps were involved, no tests were done of the functional viability of the alleged intermediate stages. These papers offer vague stories but not viable, plausibly demonstrated explanations for the origin of new genetic information.

My article was originally posted on Evolution News & Views in a series of 8 parts, and Dr. Elsberry responded to the first part of my article, the introduction. One of his main points of contention was over my observation that a scientific paper Judge Jones cited in the Kitzmiller ruling to demonstrate “the origin of new genetic information by these evolutionary processes” did not even contain the word “information” in its body. (The paper was Manyuan Long, Esther Betrán, Kevin Thornton, and Wen Wang, “The Origin of New Genes: Glimpses from the Young and Old,” Nature Reviews Genetics, Vol. 4:865-875 (November, 2003).) Dr. Elsberry calls my charge “hypocrisy” but perhaps he is not aware of the background here, which shows that it is in fact Judge Jones who is using double standards.

In his ruling, Judge Jones repeatedly (and wrongly) claimed that ID had not published peer-reviewed scientific articles. A variety of such peer-reviewed articles were presented to him during the course of the trial, including a 2004 paper that Darwin-doubting scientists Michael Behe and David Snoke published in the journal Protein Science. That paper cast doubt on the ability of gene duplication to produce new functional protein-protein interactions. But Judge Jones dismissed Behe and Snoke’s paper because “it does not mention either irreducible complexity or ID.”

While Judge Jones is correct that their article does not contain those words, the article bears directly on those topics, as it tests the complexity inherent in enzyme-substrate interactions. Even an anti-ID article in Science acknowledged that the evolution of protein-protein interactions bears on the question of irreducible complexity and the ID argument. (See Christoph Adami, “Reducible Complexity,” Science, Vol. 312:61–63 (Apr. 7, 2006).) By Judge Jones’s standard, the absence of the exact phrases “intelligent design” or “irreducible complexity” would preclude anyone from arguing that a paper supports ID or irreducible complexity. But Judge Jones doesn’t hold evolutionists to that standard.

What makes this ironic is that Judge Jones claimed that the review paper by Long et al., “The Origin of New Genes: Glimpses From the Young and Old,” accounted for “the origin of new genetic information by evolutionary processes” in a peer-reviewed scientific publication. Yet the body of this article does not even contain the word “information,” much less the phrase “new genetic information.” The word “information” appears once in the entire article — in the title of note 103. This reveals a double standard applied by Judge Jones to pro-evolution versus pro-ID papers as regards peer review.

I’m perfectly comfortable with someone citing Long et al. regarding the origin of new genetic information, even though it doesn’t contain the word “information.” By the same token, I think that Judge Jones’s dismissal of Behe and Snoke’s paper is fallacious. I’m trying to be fair: the fact that Long et al. does not contain the word “information” should NOT preclude it from bearing on the topic. Thus, I didn’t dismiss Long et al. but instead posted a lengthy 10,000+ word analysis of the paper. Wesley Elsberry attacks me for what he considers a hasty dismissal of this paper, but why doesn’t he jump on Judge Jones for wrongly dismissing Behe’s paper?

Instead, Wesley Elsberry writes:

Luskin hasn’t even gotten around to much more than a quote-mine, some projection, and a double dollop of hypocrisy. Nor do I have any expectation that the parts yet to be published will do any better than Luskin’s initial poor showing.

As of the posting of this response, Dr. Elsberry has responded to none of the rest of my substantive critique of Long et al. But I would encourage readers to read my article and decide for themselves whether Elsberry’s premature criticisms are fair or charitable. In fact, when facing criticisms like these, perhaps now I better understand why William Dembski never felt compelled to write a lengthy response to Elsberry and Shallit’s old critique of his work in the first place. Were it not for the occasional inquiries I receive about their paper, I would not have written this response myself.

Casey Luskin

Associate Director and Senior Fellow, Center for Science and Culture
Casey Luskin is a geologist and an attorney with graduate degrees in science and law, giving him expertise in both the scientific and legal dimensions of the debate over evolution. He earned his PhD in Geology from the University of Johannesburg, and BS and MS degrees in Earth Sciences from the University of California, San Diego, where he studied evolution extensively at both the graduate and undergraduate levels. His law degree is from the University of San Diego, where he focused his studies on First Amendment law, education law, and environmental law.