In The Design Inference (Cambridge, 1998) I argue that specified complexity is a reliable empirical marker of intelligent design. A long sequence of random letters is complex without being specified. A short sequence of letters like “the,” “so,” or “a” is specified without being complex. A Shakespearean sonnet is both complex and specified. Thus in general, given an event, object, or structure, to convince ourselves that it is designed we need to show that it is improbable (i.e., complex) and suitably patterned (i.e., specified).
Not everyone agrees. Elliott Sober, for instance, holds that specified complexity is exactly the wrong instrument for detecting design (see his September 1999 review in Philosophy of Science titled “How Not to Detect Design”). In this piece I want to consider the main criticisms of specified complexity as a reliable empirical marker of intelligence, show how they fail, and argue that not only does specified complexity pinpoint how we detect design, but it is also our sole means for detecting design. Consequently, specified complexity is not just one of several ways for reinstating design in the natural sciences–it is the only way.
Specified complexity, as I explicate it in The Design Inference, belongs to statistical decision theory. Statistical decision theory attempts to set the ground rules for drawing inferences about occurrences governed by probabilities. Now, statistical decision theorists have their own internal disputes about the proper definition of probability and the proper logic for drawing probabilistic inferences. It was therefore unavoidable that specified complexity should come in for certain technical criticisms simply because the field of statistical decision theory is itself so factionalized (cf. Bayesian vs. frequentist approaches to probability).
The approach I take follows the common statistical practice (popularized by Ronald Fisher) of rejecting a chance hypothesis if a sample appears in a prespecified rejection region. What my complexity-specification criterion does is extend this statistical practice in two ways: First, it generalizes the types of rejection regions by which chance is eliminated to what I call “specifications.” Second, it allows for the elimination of all relevant chance hypotheses for an event, rather than just a single one. This dual extension is entirely consistent with the approach to hypothesis testing most widely employed in the applied statistics literature, and certainly the first one taught in any introductory statistics course.
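To make the Fisherian procedure concrete, here is a minimal sketch of rejecting a chance hypothesis when an observed sample falls in a prespecified rejection region. The fair-coin model, the region (70 or more heads in 100 flips), and the significance cutoff are all illustrative choices of mine, not figures from any statistics text.

```python
# Sketch of Fisher-style chance elimination: reject the chance
# hypothesis when the observed sample lands in a rejection region
# that was fixed in advance and is improbable under that hypothesis.
# All numbers below are illustrative assumptions.
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the probability of the
    rejection region {k heads or more} under the chance hypothesis."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def fisher_reject(n_flips, observed_heads, region_start, alpha=0.01):
    """Reject chance iff the observation falls inside the prespecified
    region AND the region itself is sufficiently improbable."""
    in_region = observed_heads >= region_start
    region_prob = binom_tail(n_flips, region_start)
    return in_region and region_prob < alpha

# 90 heads in 100 flips, with the region {>= 70 heads} fixed beforehand:
print(fisher_reject(100, 90, 70))   # True: chance hypothesis rejected
```

The two extensions mentioned above would then amount to (1) replacing the tail region with a specification and (2) running this test against every relevant chance hypothesis, not just the fair-coin one.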
I cite these technical considerations to indicate that any substantive worry about specified complexity has to lie elsewhere. I am, for instance, aware of various counterproposals to my conditional independence and tractability conditions (i.e., the joint conditions defining what it is for a pattern to be a specification and thus suitable for identifying design–not all patterns are suitable in this way), which those making these proposals regard as more consonant with their approach to statistical decision theory. Such diversity of formulations is fully to be expected given the diversity of approaches to statistical decision theory. Consequently, the concept of specified complexity is likely to undergo considerable fine-tuning and reformulation in coming years.
The worry with specified complexity centers not on its precise technical formulation (though that is important), but on the jump from specified complexity to design. Here’s the worry. Specified complexity is a statistical notion. Design, as generally understood, is a causal notion. How, then, do the two connect?
Although I regard these two arguments as utterly convincing, others regard them as less so. The problem–and Elliott Sober gives particularly apt expression to it–is that specified complexity by itself doesn’t tell us anything about how an intelligent designer might have produced an object we observe. Sober regards this as a defect. I regard it as a virtue. I’ll come back to why I think it is a virtue, but for the moment let’s consider this criticism on its own terms.
According to this criticism it is not enough to have a criterion that infers, simply from certain features of an object, an intelligence responsible for those features. Rather, we must also be able to tell a causal story about how that intelligence produced those features. A precisely crafted Swiss chronometer, for instance, is, to be sure, complex and specified. But we also possess background knowledge about the Swiss watchmakers (i.e., intelligent designers) who manufactured the chronometer. Moreover, we know something about the causal process by which they manufactured it. Even in the case of a long sequence of prime numbers from outer space, though an extra-terrestrial intelligence presumed responsible for this sequence would be largely unknown, the causal process for producing this sequence would not be utterly obscure since humans generate such sequences all the time.
Contrast this now with a biological system, one that exhibits specified complexity, but for which we have no idea how an intelligent designer might have produced it. To employ specified complexity as a reliable indicator of design here seems to tell us nothing except that the object is designed. Moreover, when we examine the logic of detecting design via specified complexity, it looks purely eliminative. The “complexity” in “specified complexity” is a measure of improbability. Now probabilities are always assigned in relation to chance hypotheses. Thus, to establish specified complexity requires defeating a set of chance hypotheses. Specified complexity therefore seems at best to tell us what’s not the case, not what is the case. Couple this with a Darwinian mechanism that is widely touted as capable of generating specified complexity, and it is no wonder that the scientific community resists making specified complexity a universal criterion for intelligence.
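The eliminative structure just described, in which probabilities are assigned relative to chance hypotheses and specified complexity is established only by defeating all of them, can be put schematically as follows. The hypothesis names, the probability bound, and the toy probabilities are placeholders of mine, not figures from any actual analysis.

```python
# Sketch of the eliminative logic: specified complexity is established
# only when the event is improbable under EVERY relevant chance
# hypothesis, not merely under one of them. Everything here is a
# schematic stand-in, not a real biological calculation.

def defeats_all_chance(prob_under, chance_hypotheses, bound=1e-50):
    """prob_under: function mapping a chance hypothesis to the
    probability it confers on the event's matching a specification.
    Returns True when every hypothesis is defeated."""
    return all(prob_under(h) < bound for h in chance_hypotheses)

# Toy probabilities under three stand-in chance hypotheses:
probs = {"uniform": 1e-120, "biased": 1e-90, "markov": 1e-60}
print(defeats_all_chance(probs.get, probs))   # True: chance eliminated
```

The point the criticism presses is visible in the code: a True result says only what did not produce the event; it names no causal mechanism.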
Let us now examine this criticism. First, even though specified complexity is established via an eliminative argument, it is not fair to say that it is established via a purely eliminative argument. If the argument were purely eliminative, one might be justified in saying that the move from specified complexity to a designing intelligence constitutes an argument from ignorance. The fact is, however, that it takes considerable knowledge on our part to come up with the right patterns (specifications) for eliminating chance and inferring design. Because these patterns qua specifications are essential to identifying specified complexity, the inference from specified complexity to a designing intelligence is not purely eliminative and may appropriately be called a “design inference” since pattern, specification, and design are, after all, related concepts.
But this raises the obvious question: what is the connection between design as a statistical notion (i.e., specified complexity) and design as a causal notion (i.e., the action of a designing intelligence)? Now it’s true that simply knowing that an object is complex and specified tells us nothing about its causal history. Even so, it’s not clear why this should be regarded as a defect of the concept. It might equally well be regarded as a virtue for enabling us neatly to separate whether something is designed from how it was produced. Once specified complexity tells us that something is designed, not only can we inquire into its production, but we can also rule out certain ways it could have been produced (i.e., it could not have been produced solely by chance and necessity). A design inference does not avoid the problem of how a designing intelligence might have produced an object. It simply makes it a separate question.
The claim that design inferences are purely eliminative is false, and the claim that they provide no causal story is true but hardly relevant–causal stories must always be assessed on a case-by-case basis independently of general statistical considerations. So where is the problem in connecting design as a statistical notion (i.e., specified complexity) to design as a causal notion (i.e., the action of a designing intelligence), especially given the close parallels between specified complexity and choice as well as the absence of counterexamples in generating specified complexity apart from intelligence?
In fact, the absence of such counterexamples is very much under dispute. Indeed, if the criticism against specified complexity being a reliable empirical marker of intelligence is to succeed, it must be because specified complexity can be purchased apart from intelligence, and thus because there are counterexamples in which specified complexity is generated apart from intelligence. Consider Sober’s reframing of William Paley’s famous watchmaker argument in his text Philosophy of Biology (Westview, 1993). Sober reframes it as an inference to the best explanation:
[Paley’s] argument involves comparing two different arguments–the first about a watch, the second about living things. We can represent the statements involved in the watch argument as follows:
A: The watch is intricate and well suited to the task of timekeeping.
W1: The watch is the product of intelligent design.
W2: The watch is the product of random physical processes.
Paley claims that P(A|W1) >> P(A|W2) [i.e., the probability of A given that W1 is the case is much bigger than the probability of A given that W2 is the case]. He then says that the same pattern of analysis applies to the following triplet of statements:
B: Living things are intricate and well-suited to the task of surviving and reproducing.
L1: Living things are the product of intelligent design.
L2: Living things are the product of random physical processes.
Paley argues that if you agree with him about the watch, you also should agree that P(B|L1) >> P(B|L2). Although the subject matters of the two arguments are different, their logic is the same. Both are inferences to the best explanation in which the Likelihood Principle [a statistical principle which says that for a set of competing hypotheses, the hypothesis that confers maximum probability on the data is the best explanation] is used to determine which hypothesis is better supported by the observations.
Sober casts the design argument as a perfectly reasonable argument. Moreover, if, as seems eminently reasonable, we interpret “intricate and well-suited to a given task” as a special case of specified complexity, then Sober seems to allow that specified complexity might signal design, now considered as a causal notion. Nevertheless, Sober rejects this design argument. Why? Enter Charles Darwin. Darwin threw a new hypothesis into the mix. In Paley’s day there were only two competing hypotheses to explain the data B (= Living things are intricate and well-suited to the task of surviving and reproducing). These hypotheses were L1 (= Living things are the product of intelligent design) and L2 (= Living things are the product of random physical processes). Given only L1 and L2, L1 is the clear winner. But with Darwin’s theory, Sober now has a third hypothesis, L3: Living things are the product of variation and selection.
According to Sober, once the playing field is increased to include the Darwinian hypothesis L3, L1 no longer fares very well. To be sure, L1 still explains the data B quite nicely. But it fails, according to Sober, to account for additional data, like the fossil record, suboptimality of design, and vestigial organs. Prior to Darwin, Paley had offered what was the best explanation of life. With Darwin, the best explanation shifted. Inference to the best explanation is inherently competitive. Best explanations are not best across all times and circumstances. Rather, they are best relative to the hypotheses currently available and the background information and auxiliary assumptions we use to evaluate those hypotheses.
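The Likelihood Principle quoted above, and the way adding Darwin’s hypothesis shifts the verdict, can be rendered as a toy comparison. The numerical likelihoods below are illustrative stand-ins of mine, not actual probability estimates for design or Darwinian hypotheses.

```python
# Toy rendering of the Likelihood Principle: among competing
# hypotheses, favor the one conferring maximum probability on the
# observations. All numbers are illustrative placeholders.

def best_explanation(likelihoods):
    """likelihoods: dict mapping hypothesis -> P(data | hypothesis).
    Returns the hypothesis conferring the highest probability."""
    return max(likelihoods, key=likelihoods.get)

# Paley's era: only L1 and L2 compete, and L1 wins easily.
paley = {"L1 design": 0.9, "L2 random processes": 1e-40}
print(best_explanation(paley))      # L1 design

# Enlarging the field can shift the winner once auxiliary data
# (fossil record, suboptimality) are folded into the likelihoods:
darwin = dict(paley, **{"L3 variation and selection": 0.95})
print(best_explanation(darwin))     # L3 variation and selection
```

This makes vivid why inference to the best explanation is inherently competitive: the winner is only best relative to the hypotheses currently on the table.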
What, then, is the problem with claiming that specified complexity is a reliable empirical marker of intelligence? The problem isn’t that establishing specified complexity assumes the form of an eliminative argument. Nor is the problem that specified complexity fails to identify a causal story. Instead, the problem is that specified complexity is supposed to miscarry by counterexample. In particular, the Darwinian mechanism is supposed to purchase specified complexity apart from a designing intelligence. But does it? In two of my recent posts to META I argued that the Darwinian mechanism–and indeed any non-telic mechanism–is incapable of generating specified complexity (see “Explaining Specified Complexity” and “Why Evolutionary Algorithms Cannot Explain Specified Complexity”). A fuller discussion can be found in chapter 6 of my book Intelligent Design: The Bridge Between Science & Theology (InterVarsity). Note that the incapacity here is not a lack of imagination on the part of the design theorist (I’m not offering what Richard Dawkins calls “an argument from personal incredulity”), but a limitation inherent in any non-telic mechanism.
Although death by counterexample would certainly be a legitimate way for specified complexity to fail as a reliable empirical marker of intelligence, Sober suggests that there is still another way for it to fail. According to Sober this criterion fails as a rational reconstruction of how we detect design in common life. Instead, Sober proposes a likelihood analysis in which one compares competing hypotheses in terms of the probability they confer (cf. the passage from Sober quoted a few paragraphs back). Sober uses this likelihood analysis to model inference to the best explanation, a common mode of scientific reasoning. To be sure, this likelihood analysis is useful as a way of thinking about scientific explanation. But it hardly gets at the root of how we infer design. In particular, it doesn’t come to terms with specification, complexity, and their joint role in eliminating chance.
Take an event E that is the product of intelligent design, but for which we haven’t yet seen the relevant pattern that makes its design clear to us (take the SETI example where a long sequence of prime numbers reaches us from outer space, but suppose we haven’t yet seen that it is a sequence of prime numbers). Without that pattern we won’t be able to distinguish between P(E takes the form it does | E is the result of chance) and P(E takes the form it does | E is the result of design), and thus we won’t be able to infer design for E. Only once we see the pattern will we, on a likelihood analysis, be able to see that the latter probability is greater than the former. But what are the right sorts of patterns that allow us to see that? Not all patterns indicate design. What’s more, the pattern to which E conforms needs to be complex or else E could readily be referred to chance. We are back, then, to needing some account of complexity and specification. Thus a likelihood analysis that pits competing design and chance hypotheses against each other must itself presuppose the legitimacy of specified complexity as a reliable empirical marker of intelligence.
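The two ingredients the likelihood analysis presupposes, a specification (here, the independently given pattern “the first n primes”) and complexity (improbability under a chance hypothesis), can be sketched for the SETI example. The uniform chance model over numbers 1 to 101 is an illustrative assumption of mine, not a serious model of radio noise.

```python
# Sketch of specification and complexity for the prime-number SETI
# example. The uniform chance hypothesis is a stand-in assumption.
from math import log2

def first_primes(n):
    """The independently given pattern: the first n prime numbers."""
    primes, candidate = [], 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def matches_specification(seq):
    """Specification check: does the received sequence conform to
    the prime-number pattern? Without seeing this pattern, no design
    inference gets off the ground."""
    return seq == first_primes(len(seq))

def complexity_bits(seq, alphabet_size=101):
    """Complexity as -log2 of the sequence's probability under a
    uniform chance hypothesis over numbers 1..alphabet_size."""
    return len(seq) * log2(alphabet_size)

signal = first_primes(25)                # 2, 3, 5, ..., 97
print(matches_specification(signal))     # the pattern is seen: True
print(round(complexity_bits(signal)))    # 166 bits of improbability
```

A short pattern (a lone “2, 3”) would pass the specification check yet carry too few bits to defeat chance; a long random sequence would carry the bits yet match no pattern. Only the conjunction licenses the inference, which is the point of the paragraph above.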
Consequently, if there is a way to detect design, specified complexity is it.