The Act of Creation

Bridging Transcendence and Immanence

Presented at Millstatt Forum, Strasbourg, France, 10 August 1998

Introduction

“Sing, O Goddess, the anger of Achilles son of Peleus, that brought countless ills upon the Achaeans.” In these opening lines of the Iliad, Homer invokes the Muse. For Homer the act of creating poetry is a divine gift, one that derives from an otherworldly source and is not ultimately reducible to this world. This conception of human creativity as a divine gift pervaded the ancient world, and was also evident among the Hebrews. In Exodus, for instance, we read that God filled the two artisans Bezaleel and Aholiab with wisdom so that they might complete the work of the tabernacle.

The idea that creative activity is a divine gift has largely been lost these days. To ask a cognitive scientist, for instance, what made Mozart a creative genius is unlikely to issue in an appeal to God. If the cognitive scientist embraces neuropsychology, he may suggest that Mozart was blessed with a particularly fortunate collocation of neurons. If he prefers an information processing model of mentality, he may attribute Mozart’s genius to some particularly effective computational modules. If he is taken with Skinner’s behaviorism, he may attribute Mozart’s genius to some particularly effective reinforcement schedules (perhaps imposed early in his life by his father Leopold). And no doubt, in all of these explanations the cognitive scientist will invoke Mozart’s natural genetic endowment. In place of a divine afflatus, the modern cognitive scientist explains human creativity purely in terms of natural processes.

Who’s right, the ancients or the moderns? My own view is that the ancients got it right. An act of creation is always a divine gift and cannot be reduced to purely naturalistic categories. To be sure, creative activity often involves the transformation of natural objects, like the transformation of a slab of marble into Michelangelo’s David. But even when confined to natural objects, creative activity is never naturalistic without remainder. The divine is always present at some level and indispensable.

Invoking the divine to explain an act of creation is, of course, wholly unacceptable to the ruling intellectual elite. Naturalism, the view that nature is the ultimate reality, has become the default position for all serious inquiry among our intellectual elite. From Biblical studies to law to education to science to the arts, inquiry is allowed to proceed only under the supposition that nature is the ultimate reality. Naturalism denies any divine element to the creative act. By contrast, the Christian tradition plainly asserts that God is the ultimate reality and that nature itself is a divine creative act. Within Christian theism, God is primary and fundamental whereas nature is secondary and derivative. Naturalism, by contrast, asserts that nature is primary and fundamental.

Theism and naturalism provide radically different perspectives on the act of creation. Within theism any act of creation is also a divine act. Within naturalism any act of creation emerges from a purely natural substrate — the very minds that create are, within naturalism, the result of a long evolutionary process that itself was not created. The aim of this talk, then, is to present a general account of creation that is faithful to the Christian tradition, that resolutely rejects naturalism, and that engages contemporary developments in science and philosophy.

The Challenge of Naturalism

Why should anyone want to understand the act of creation naturalistically? Naturalism, after all, offers fewer resources than theism. Naturalism simply gives you nature. Theism gives you not only nature, but also God and anything outside of nature that God might have created. The ontology of theism is far richer than that of naturalism. Why, then, settle for less?

Naturalists do not see themselves as settling for less. Instead, they regard theism as saddled with a lot of extraneous entities that serve no useful function. The regulative principle of naturalism is Occam’s razor. Occam’s razor is a principle of parsimony that requires eliminating entities that perform no useful function. Using Occam’s razor, naturalists attempt to slice away the superstitions of the past, and for naturalists the worst superstition of all is God. People used to invoke God to explain all sorts of things for which we now have perfectly good naturalistic explanations. Accordingly, God is a superstition that needs to be excised from our understanding of the world. The naturalists’ dream is to invent a theory of everything that entirely eliminates the need for God (Stephen Hawking is a case in point).

Since naturalists are committed to eliminating God from every domain of inquiry, let us consider how successfully they have eliminated God from the act of creation. Even leaving aside the creation of the world and focusing solely on human acts of creation, do we find that naturalistic categories have fully explained human creativity? Occam’s razor is all fine and well for removing stubble, but while we’re at it let’s make sure we don’t lop off a nose or ear. With respect to human creativity, let’s make sure that in eliminating God the naturalist isn’t giving us a lobotomized account of human creativity. Einstein once remarked that everything should be made as simple as possible but not simpler. In eliminating God from the act of creation, the naturalist needs to make sure that nothing of fundamental importance has been lost. Not only has the naturalist failed to provide this assurance, but there is good reason to think that any account of the creative act that omits God is necessarily incomplete and defective.

What does naturalism have to say about human acts of creation? For the moment let’s bracket the question of creativity and consider simply what it is for a human being to act. Humans are intelligent agents that act with intentions to accomplish certain ends. Although some acts by humans are creative, others are not. Georgia O’Keeffe painting an iris is a creative act. Georgia O’Keeffe flipping on a light switch is an act but not a creative act. For the moment, therefore, let us focus simply on human agency, leaving aside human creative agency.

How, then, does naturalism make sense of human agency? Although the naturalistic literature that attempts to account for human agency is vast, the naturalist’s options are in fact quite limited. The naturalist’s world is not a mind-first world. Intelligent agency is therefore in no sense prior to or independent of nature. Intelligent agency is neither sui generis nor basic. Intelligent agency is a derivative mode of causation that depends on underlying naturalistic — and therefore unintelligent — causes. Human agency in particular supervenes on underlying natural processes, which in turn usually are identified with brain function.

It is important to distinguish the naturalist’s understanding of causation from the theist’s. Within theism God is the ultimate reality. Consequently, whenever God acts, there can be nothing outside of God that compels God’s action. God is not a billiard ball that must move when another billiard ball strikes it. God’s actions are free, and though he responds to his creation, he does not do so out of necessity. Within theism, therefore, divine action is not reducible to some more basic mode of causation. Indeed, within theism divine action is the most basic mode of causation since any other mode of causation involves creatures which themselves were created in a divine act.

Now consider naturalism. Within naturalism nature is the ultimate reality. Consequently, whenever something happens in nature, there can be nothing outside of nature that shares responsibility for what happened. Thus, when an event happens in nature, it is either because some other event in nature was responsible for it or because it simply happened, apart from any other determining event. Events therefore happen either because they were caused by other events or because they happened spontaneously. The first of these is usually called “necessity,” the second “chance.” For the naturalist chance and necessity are the fundamental modes of causation. Together they constitute what are called “natural causes.” Naturalism, therefore, seeks to account for intelligent agency in terms of natural causes.

How well have natural causes been able to account for intelligent agency? Cognitive scientists have achieved nothing like a full reduction. The French Enlightenment thinker Pierre Cabanis once remarked: “Les nerfs-voilà tout l’homme” (the nerves — that’s all there is to man). A full reduction of intelligent agency to natural causes would give a complete account of human behavior, intention, and emotion in terms of neural processes. Nothing like this has been achieved. No doubt, neural processes are correlated with behavior, intention, and emotion. Anger presumably is correlated with certain localized brain excitations. But localized brain excitations hardly explain anger any better than do overt behaviors associated with anger — like shouting obscenities.

Because cognitive scientists have yet to effect a full reduction of intelligent agency to natural causes, they speak of intelligent agency as supervening on natural causes. Supervenience is a hierarchical relationship between higher order processes (in this case intelligent agency) and lower order processes (in this case natural causes). What supervenience says is that the relationship between the higher and lower order processes is a one-way street, with the lower determining the higher. To say, for instance, that intelligent agency supervenes on neurophysiology is to say that once all the facts about neurophysiology are in place, all the facts about intelligent agency are determined as well. Supervenience makes no pretense at reductive analysis. It simply asserts that the lower level determines the higher level — how it does it, we don’t know.

Supervenience is therefore an insulating strategy, designed to protect a naturalistic account of intelligent agency until a full reductive explanation is found. Supervenience, though not providing a reduction, tells us that in principle a reduction exists. Given that nothing like a full reductive explanation of intelligent agency is at hand, why should we think that such a reduction is even possible? To be sure, if we knew that naturalism were correct, then supervenience would follow. But naturalism itself is at issue.

Neuroscience, for instance, is nowhere near achieving its ambitions, and that despite its strident rhetoric. Hardcore neuroscientists refer disparagingly to the ordinary psychology of beliefs, desires, and emotions as “folk psychology.” The implication is that just as “folk medicine” had to give way to “real medicine,” so “folk psychology” will have to give way to a revamped psychology that is grounded in neuroscience. In place of talking cures that address our beliefs, desires, and emotions, tomorrow’s healers of the soul will manipulate brain states directly and ignore such outdated categories as beliefs, desires, and emotions.

At least so the story goes. Actual neuroscience research has yet to keep pace with its vaulting ambition. That should hardly surprise us. The neurophysiology of our brains is incredibly plastic and has proven notoriously difficult to correlate with intentional states. For instance, Louis Pasteur, despite suffering a cerebral accident, continued to enjoy a flourishing scientific career. When his brain was examined after he died, it was discovered that half the brain had completely atrophied. How does one explain a flourishing intellectual life despite a severely damaged brain if mind and brain coincide?

Or consider a still more striking example. The December 12th, 1980 issue of Science contained an article by Roger Lewin titled “Is Your Brain Really Necessary?” In the article, Lewin reported a case study by John Lorber, a British neurologist and professor at Sheffield University. I quote from the article:

“There’s a young student at this university,” says Lorber, “who has an IQ of 126, has gained a first-class honors degree in mathematics, and is socially completely normal. And yet the boy has virtually no brain.” [Lewin continues:] The student’s physician at the university noticed that the youth had a slightly larger than normal head, and so referred him to Lorber, simply out of interest. “When we did a brain scan on him,” Lorber recalls, “we saw that instead of the normal 4.5-centimeter thickness of brain tissue between the ventricles and the cortical surface, there was just a thin layer of mantle measuring a millimeter or so. His cranium is filled mainly with cerebrospinal fluid.”

Against such anomalies, Cabanis’s dictum, “the nerves — that’s all there is to man,” hardly inspires confidence. Yet as Thomas Kuhn has taught us, a science that is progressing fast and furiously is not about to be derailed by a few anomalies. Neuroscience is a case in point. For all the obstacles it faces in trying to reduce intelligent agency to natural causes, neuroscience persists in the Promethean determination to show that mind does ultimately reduce to neurophysiology. Absent a prior commitment to naturalism, this determination will seem misguided. On the other hand, given a prior commitment to naturalism, this determination is readily understandable.

Understandable yes, obligatory no. Most cognitive scientists do not rest their hopes with neuroscience. Yes, if naturalism is correct, then a reduction of intelligent agency to neurophysiology is in principle possible. The sheer difficulty of even attempting this reduction, both experimental and theoretical, however, leaves many cognitive scientists looking for a more manageable field to invest their energies. As it turns out, the field of choice is computer science, and especially its subdiscipline of artificial intelligence (abbreviated AI). Unlike brains, computers are neat and precise. Also, unlike brains, computers and their programs can be copied and mass-produced. Inasmuch as science thrives on replicability and control, computer science offers tremendous practical advantages over neurological research.

Whereas the goal of neuroscience is to reduce intelligent agency to neurophysiology, the goal of artificial intelligence is to reduce intelligent agency to computer algorithms. Since computers operate deterministically, reducing intelligent agency to computer algorithms would indeed constitute a naturalistic reduction of intelligent agency. Should artificial intelligence succeed in reducing intelligent agency to computation, cognitive scientists would still have the task of showing in what sense brain function is computational (alternatively, Marvin Minsky’s dictum “the mind is a computer made of meat” would still need to be verified). Even so, the reduction of intelligent agency to computation would go a long way toward establishing a purely naturalistic basis for human cognition.

An obvious question now arises: Can computation explain intelligent agency? First off, let’s be clear that no actual computer system has come anywhere near to simulating the full range of capacities we associate with human intelligent agency. Yes, computers can do certain narrowly circumscribed tasks exceedingly well (like play chess). But require a computer to make a decision that is based on incomplete information and that calls for common sense, and the computer will be lost. Perhaps the toughest problem facing artificial intelligence researchers is what’s called the frame problem. The frame problem is getting a computer to find the appropriate frame of reference for solving a problem.

Consider, for instance, the following story: A man enters a bar. The bartender asks, “What can I do for you?” The man responds, “I’d like a glass of water.” The bartender pulls out a gun and shouts, “Get out of here!” The man says “thank you” and leaves. End of story. What is the appropriate frame of reference? No, this isn’t a story by Franz Kafka. The key item of information needed to make sense of this story is this: The man has the hiccups. By going to the bar to get a drink of water, the man hoped to cure his hiccups. The bartender, however, decided on a more radical cure. By terrifying the man with a gun, the bartender cured the man’s hiccups immediately. Cured of his hiccups, the man was grateful and left. Humans are able to understand the appropriate frame of reference for such stories immediately. Computers, on the other hand, haven’t a clue.

Ah, but just wait. Give an army of clever programmers enough time, funding, and computational power, and just see if they don’t solve the frame problem. Naturalists are forever issuing such promissory notes, claiming that a conclusive confirmation of naturalism is right around the corner — just give our scientists a bit more time and money. John Polkinghorne refers to this practice as “promissory materialism.”

Confronted with such promises, what’s a theist to do? To refuse such promissory notes provokes the charge of obscurantism, but to accept them means suspending one’s theism. It is possible to reject promissory materialism without meriting the charge of obscurantism. The point to realize is that a promissory note need only be taken seriously if there is good reason to think that it can be paid. The artificial intelligence community has thus far offered no compelling reason for thinking that it will ever solve the frame problem. Indeed, computers that employ common sense to determine appropriate frames of reference continue utterly to elude computer scientists.

Given the practical difficulties of producing a computer that faithfully models human cognition, the hardcore artificial intelligence advocate can change tactics and argue on theoretical grounds that humans are simply disguised computers. The argument runs something like this. Human beings are finite. Both the space of possible human behaviors and the space of possible sensory inputs are finite. For instance, there are only so many distinguishable word combinations that we can utter and only so many distinguishable sound combinations that can strike our eardrums. When represented mathematically, the total number of human lives that can be distinguished empirically is finite. Now it is an immediate consequence of recursion theory (the mathematical theory that undergirds computer science) that any operations and relations on finite sets are computable. It follows that human beings can be represented computationally. Humans are therefore functionally equivalent to computers. QED.
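
The recursion-theoretic step, at least, is uncontroversial. The following minimal Python sketch illustrates it (the lookup table and names below are purely illustrative, not a model of any actual agent): any finite pairing of inputs with outputs, however large, is computable by brute table lookup.

```python
# A minimal sketch of the recursion-theoretic point: any function defined on a
# finite set is computable, because it can be tabulated and evaluated by lookup.
# The tiny "behavior table" below is purely illustrative.

from typing import Dict, Tuple

behavior_table: Dict[Tuple[str, ...], str] = {
    ("light", "bell"): "press lever",
    ("light",): "wait",
    ("bell",): "orient to sound",
}

def respond(inputs: Tuple[str, ...]) -> str:
    """Return the tabulated output for a given input.

    Because the domain is finite, this lookup is a total, terminating
    procedure, which is all the finiteness argument requires.
    """
    return behavior_table.get(inputs, "no learned response")

if __name__ == "__main__":
    print(respond(("light", "bell")))  # press lever
    print(respond(("tone",)))          # no learned response
```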

This argument can be nuanced. For instance, we can introduce a randomizing element into our computations to represent quantum indeterminacy. What’s important here, however, is the gist of the argument. The argument asks us to grant that humans are essentially finite. Once that assumption is granted, recursion theory tells us that everything a finite being does is computable. We may never actually be able to build the machines that render us computable. But in principle we could given enough memory and fast enough processors.

It’s at this point that opponents of computational reductionism usually invoke Gödel’s incompleteness theorem. Gödel’s theorem is said to refute computational reductionism by showing that humans can do things that computers cannot — namely, produce a Gödel sentence. John Lucas made such an argument in the early 1960s, and his argument continues to be modified and revived. Now it is perfectly true that humans can produce Gödel sentences for computational systems external to themselves. But computers can as well be programmed to compute Gödel sentences for computational systems external to themselves. This point is seldom appreciated, but becomes evident from recursion-theoretic proofs of Gödel’s theorem (see, for example, Klaus Weihrauch’s Computability).
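
Since everything turns on what the theorem actually licenses, it is worth setting down a standard formulation, together with the effectiveness fact just mentioned (the wording below is the usual textbook statement, not drawn from Lucas or from Weihrauch):

```latex
% Gödel's first incompleteness theorem (standard textbook formulation).
Let $T$ be a consistent, recursively axiomatizable theory that interprets
elementary arithmetic. Then there is a sentence $G_T$ such that
$T \nvdash G_T$; and if $T$ is moreover $\omega$-consistent (or if Rosser's
variant of the sentence is used), then also $T \nvdash \neg G_T$.
The sentence $G_T$ is constructed so that, provably in a weak base theory,
\[
  G_T \;\longleftrightarrow\; \neg\,\mathrm{Prov}_T\bigl(\ulcorner G_T \urcorner\bigr).
\]
Moreover, the passage from a recursive presentation of $T$ to (a code of)
$G_T$ is itself computable, which is why a computer can be programmed to
output Gödel sentences for formal systems presented to it from outside.
```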

The problem, then, is not to find Gödel sentences for computational systems external to oneself. The problem is for an agent to examine itself as a computational system and therewith produce its own Gödel sentence. If human beings are non-computational, then there won’t be any Gödel sentence to be found. If, on the other hand, human beings are computational, then, by Gödel’s theorem, we won’t be able to find our own Gödel sentences. And indeed, we haven’t. Our inability to translate neurophysiology into computation guarantees that we can’t even begin computing our Gödel sentences if indeed we are computational systems. Yes, for a computational system laid out before us we can determine its Gödel sentence. Nevertheless, we don’t have sufficient access to ourselves to lay ourselves out before ourselves and thereby determine our Gödel sentences. It follows that neither Gödel’s theorem nor our ability to prove Gödel’s theorem shows that humans can do things that computers cannot.

Accordingly, Gödel’s theorem fails to refute the argument for computational reductionism based on human finiteness. To recap that argument, humans are finite because the totality of their possible behavioral outputs and possible sensory inputs is finite. Moreover, all operations and relations on finite sets are by recursion theory computable. Hence, humans are computational systems. This is the argument. What are we to make of it? Despite the failure of Gödel’s theorem to block its conclusion, is there a flaw in the argument?

Yes there is. The flaw consists in identifying human beings with their behavioral outputs and sensory inputs. Alternatively, the flaw consists in reducing our humanity to what can be observed and measured. We are more than what can be observed and measured. Once, however, we limit ourselves to what can be observed and measured, we are necessarily in the realm of the finite and therefore computable. We can only make so many observations. We can only take so many measurements. Moreover, our measurements never admit infinite gradations (indeed, there’s always some magnitude below which quantities become empirically indistinguishable). Our empirical selves are therefore essentially finite. It follows that unless our actual selves transcend our empirical selves, our actual selves will be finite as well — and therefore computational.

Roger Penrose understands this problem. In The Emperor’s New Mind and in his more recent Shadows of the Mind, he invokes quantum theory to underwrite a non-computational view of brain and mind. Penrose’s strategy is the same that we saw for Gödel’s theorem: Find something humans can do that computers can’t. There are plenty of mathematical functions that are non-computable. Penrose therefore appeals to quantum processes in the brain whose mathematical characterization employs non-computable functions.

Does quantum theory offer a way out of computational reductionism? I would say no. Non-computable functions are an abstraction. To be non-computable, functions have to operate on infinite sets. The problem, however, is that we have no observational experience of infinite sets or of the non-computable functions defined on them. Yes, the mathematics of quantum theory employs non-computable functions. But when we start plugging in concrete numbers and doing calculations, we are back to finite sets and computable functions.

Granted, we may find it convenient to employ non-computable functions in characterizing some phenomenon. But when we need to say something definite about the phenomenon, we must supply concrete numbers, and suddenly we are back in the realm of the computable. Non-computability exists solely as a mathematical abstraction — a useful abstraction, but an abstraction nonetheless. Precisely because our behavioral outputs and sensory inputs are finite, there is no way to test non-computability against experience. All scientific data are finite, and any mathematical operations we perform on that data are computable. Non-computable functions are therefore always dispensable, however elegant they may appear mathematically.
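
The point can be made precise with the standard example from recursion theory (a textbook fact, not something peculiar to Penrose or to quantum theory): even the paradigm non-computable function agrees with some computable function on any finite body of data.

```latex
% Why finite data can never certify non-computability.
Let $h:\mathbb{N}\to\{0,1\}$ be the halting function, with $h(n)=1$ just in
case the $n$-th program halts on empty input. By Turing's theorem, $h$ is not
computable. Yet for every finite set $F\subset\mathbb{N}$, the restriction
$h|_F$ is computable, since it can be written out as a finite lookup table.
Consequently, any finite collection of observed values of a function is also
the trace of some computable function, and no experiment with finitely many
outcomes can force a non-computable model upon us.
```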

There is, however, still a deeper problem with Penrose’s program to eliminate computational reductionism. Suppose we could be convinced that there are processes in the brain that are non-computational. For Penrose they are quantum processes, but whatever form they take, as long as they are natural processes, we are still dealing with a naturalistic reduction of mind. Computational reductionism is but one type of naturalistic reductionism — certainly the most extreme, but by no means the only one. Penrose’s program offers to replace computational processes with quantum processes. Quantum processes, however, are as fully naturalistic as computational processes. In offering to account for mind in terms of quantum theory, Penrose is therefore still wedded to a naturalistic reduction of mind and intelligent agency.

It’s time to ask the obvious question: Why should anyone want to make this reduction? Certainly, if we have a prior commitment to naturalism, we will want to make it. But apart from that commitment, why attempt it? As we’ve seen, neurophysiology hasn’t a clue about how to reduce intelligent agency to natural causes (hence its continued retreat to concepts like supervenience, emergence, and hierarchy — concepts which merely cloak ignorance). We’ve also seen that no actual computational systems show any sign of reducing intelligent agency to computation. The argument that we are computational systems because the totality of our possible behavioral outputs and possible sensory inputs is finite holds only if we presuppose that we are nothing more than the sum of those behavioral outputs and sensory inputs. So too, Penrose’s argument that we are naturalistic systems because some well-established naturalistic theory (in this case quantum theory) characterizes our neurophysiology holds only if the theory does indeed accurately characterize our neurophysiology (itself a dubious claim given the frequency with which scientific theories are overturned) and so long as we presuppose that we are nothing more than a system characterized by some naturalistic theory.

Bottom line: The naturalistic reduction of intelligent agency is not the conclusion of an empirically-based evidential argument, but merely a straightforward consequence of presupposing naturalism in the first place. Indeed, the empirical evidence for a naturalistic reduction of intelligent agency is wholly lacking. For instance, nowhere does Penrose write down the Schroedinger equation for someone’s brain, and then show how actual brain states agree with brain states predicted by the Schroedinger equation. Physicists have a hard enough time writing down the Schroedinger equation for systems of a few interacting particles. Imagine the difficulty of writing down the Schroedinger equation for the billions of neurons that constitute each of our brains. It ain’t going to happen. Indeed, the only thing these naturalistic reductions of intelligent agency have until recently had in their favor is Occam’s razor. And even this naturalistic mainstay is proving small comfort. Indeed, recent developments in the theory of intelligent design are showing that intelligent agency cannot be reduced to natural causes. Let us now turn to these developments.

The Resurgence of Design

In arguing against computational reductionism, both John Lucas and Roger Penrose attempted to find something humans can do that computers cannot. For Lucas, it was to construct a Gödel sentence. For Penrose, it was finding in neurophysiology a non-computational quantum process. Neither of these refutations succeeds against computational reductionism, much less against a general naturalistic reduction of intelligent agency. Nevertheless, the strategy underlying these attempted refutations is sound, namely, to find something intelligent agents can do that natural causes cannot. We don’t have to look far. All of us attribute things to intelligent agents that we wouldn’t dream of attributing to natural causes. For instance, natural causes can throw Scrabble pieces on a board, but cannot arrange the pieces into meaningful sentences. To obtain a meaningful arrangement requires an intelligent agent.

This intuition, that natural causes are too stupid to do the things that intelligent agents are capable of, has underlain the design arguments of past centuries. Throughout the centuries theologians have argued that nature exhibits features which nature itself cannot explain, but which instead require an intelligence that transcends nature. From Church fathers like Minucius Felix and Basil the Great (third and fourth centuries) to medieval scholastics like Moses Maimonides and Thomas Aquinas (twelfth and thirteenth centuries) to reformed thinkers like Thomas Reid and Charles Hodge (eighteenth and nineteenth centuries), we find theologians making design arguments, arguing from the data of nature to an intelligence operating over and above nature.

Design arguments are old hat. Indeed, design arguments continue to be a staple of philosophy and religion courses. The most famous of the design arguments is William Paley’s watchmaker argument. According to Paley, if we find a watch in a field, the watch’s adaptation of means to ends (that is, the adaptation of its parts to telling time) ensures that it is the product of an intelligence, and not simply the output of undirected natural processes. So too, the marvelous adaptations of means to ends in organisms, whether at the level of whole organisms, or at the level of various subsystems (Paley focused especially on the mammalian eye), ensure that organisms are the product of an intelligence.

Though intuitively appealing, Paley’s argument had until recently fallen into disuse. This is now changing. In the last five years design has witnessed an explosive resurgence. Scientists are beginning to realize that design can be rigorously formulated as a scientific theory. What has kept design outside the scientific mainstream these last hundred and forty years is the absence of a precise criterion for distinguishing intelligent agency from natural causes. For design to be scientifically tenable, scientists have to be sure they can reliably determine whether something is designed. Johannes Kepler, for instance, thought the craters on the moon were intelligently designed by moon dwellers. We now know that the craters were formed naturally. It’s this fear of falsely attributing something to design only to have it overturned later that has prevented design from entering science proper. With a precise criterion for discriminating intelligently from unintelligently caused objects, scientists are now able to avoid Kepler’s mistake.

Before examining this criterion, I want to offer a brief clarification about the word “design.” I’m using “design” in three distinct senses. First, I use it to denote the scientific theory that distinguishes intelligent agency from natural causes, a theory that increasingly is being referred to as “design theory” or “intelligent design theory” (IDT). Second, I use “design” to denote what it is about intelligently produced objects that enables us to tell that they are intelligently produced and not simply the result of natural causes. When intelligent agents act, they leave behind a characteristic trademark or signature. The scholastics used to refer to the “vestiges of creation.” The Latin vestigium means footprint. It was thought that God, though not physically present, left his footprints throughout creation. Hugh Ross has referred to the “fingerprint of God.” It is “design” in this sense — as a trademark, signature, vestige, or fingerprint — that our criterion for discriminating intelligently from unintelligently caused objects is meant to identify. Lastly, I use “design” to denote intelligent agency itself. Thus, to say that something is designed is to say that an intelligent agent caused it.

Let us now turn to my advertised criterion for discriminating intelligently from unintelligently caused objects. Although a detailed treatment of this criterion is technical and appears in my book The Design Inference, the basic idea is straightforward and easily illustrated. Consider how the radio astronomers in the movie Contact detected an extra-terrestrial intelligence. This movie, which came out last summer and was based on a novel by Carl Sagan, was an enjoyable piece of propaganda for the SETI research program — the Search for Extra-Terrestrial Intelligence. To make the movie interesting, the SETI researchers had to find an extra-terrestrial intelligence (the actual SETI program has yet to be so fortunate).

How, then, did the SETI researchers in Contact find an extra-terrestrial intelligence? To increase their chances of finding an extra-terrestrial intelligence, SETI researchers have to monitor millions of radio signals from outer space. Many natural objects in space produce radio waves. Looking for signs of design among all these naturally produced radio signals is like looking for a needle in a haystack. To sift through the haystack, SETI researchers run the signals they monitor through computers programmed with pattern-matchers. So long as a signal doesn’t match one of the pre-set patterns, it will pass through the pattern-matching sieve. If, on the other hand, it does match one of those patterns, then, depending on the pattern matched, the SETI researchers may have cause for celebration.

The SETI researchers in Contact did find a signal worthy of celebration, namely the sequence of prime numbers from 2 to 101, represented as a series of beats and pauses (2 = beat-beat-pause; 3 = beat-beat-beat-pause; 5 = beat-beat-beat-beat-beat-pause; etc.). The SETI researchers in Contact took this signal as decisive confirmation of an extra-terrestrial intelligence. What is it about this signal that warrants us inferring design? Whenever we infer design, we must establish two things — complexity and specification. Complexity ensures that the object in question is not so simple that it can readily be explained by natural causes. Specification ensures that this object exhibits the type of pattern that is the signature of intelligence.
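
To fix ideas, here is a toy Python sketch of such a pattern-matching sieve, using the beat-and-pause encoding just described; the function names and sample signals are illustrative assumptions only and bear no relation to SETI’s actual software.

```python
# A toy sketch of a pattern-matching sieve, assuming the beat/pause encoding
# stated above: each prime p is sent as p beats ('1') followed by a pause ('0').

from typing import List

def primes_up_to(n: int) -> List[int]:
    """Return the primes from 2 to n by trial division (fine for small n)."""
    return [p for p in range(2, n + 1)
            if all(p % d for d in range(2, int(p ** 0.5) + 1))]

def encode_primes(n: int = 101) -> str:
    """Encode the primes 2..n as runs of beats separated by pauses."""
    return "".join("1" * p + "0" for p in primes_up_to(n))

def matches_prime_pattern(signal: str, n: int = 101) -> bool:
    """The pre-set pattern check: does the monitored signal contain the full encoding?"""
    return encode_primes(n) in signal

if __name__ == "__main__":
    target = encode_primes(101)
    noisy_background = "1101001110101100010"        # no match
    candidate = "000" + target + "1010"             # embeds the prime pattern
    print(matches_prime_pattern(noisy_background))  # False
    print(matches_prime_pattern(candidate))         # True
```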

To see why complexity is crucial for inferring design, consider what would have happened if the SETI researchers had simply witnessed a single prime number — say the number 2 represented by two beats followed by a pause. It is a sure bet that no SETI researcher, if confronted with this three-bit sequence (beat-beat-pause), is going to contact the science editor at the New York Times, hold a press conference, and announce that an extra-terrestrial intelligence has been discovered. No headline is going to read, “Aliens Master the Prime Number Two!”

The problem is that two beats followed by a pause is too short a sequence (that is, has too little complexity) to establish that an extra-terrestrial intelligence with knowledge of prime numbers produced it. A randomly beating radio source might by chance just happen to output the sequence beat-beat-pause. The sequence of 1126 beats and pauses required to represent the prime numbers from 2 to 101, however, is a different story. Here the sequence is sufficiently long (that is, has enough complexity) to confirm that an extra-terrestrial intelligence could have produced it.

Even so, complexity by itself isn’t enough to eliminate natural causes and detect design. If I flip a coin 1000 times, I’ll participate in a highly complex (or what amounts to the same thing, highly improbable) event. Indeed, the sequence I end up flipping will be one of 10^300 possible sequences. This sequence, however, won’t trigger a design inference. Though complex, it won’t exhibit a pattern characteristic of intelligence. In contrast, consider the sequence of prime numbers from 2 to 101. Not only is this sequence complex, but it also constitutes a pattern characteristic of intelligence. The SETI researcher who in the movie Contact first noticed the sequence of prime numbers put it this way: “This isn’t noise, this has structure.”
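
As a quick check on the magnitude involved: 1000 coin flips yield 2^1000 equally likely sequences, a number with 302 decimal digits. The snippet below verifies this.

```python
# Verify the size of the space of 1000-flip sequences: 2**1000 is an integer
# with 302 decimal digits (about 1.07e301), so any particular sequence is
# overwhelmingly improbable under the chance hypothesis.

import math

outcomes = 2 ** 1000
print(len(str(outcomes)))    # 302 digits
print(math.log10(outcomes))  # about 301.03
```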

What makes a pattern characteristic of intelligence and therefore suitable for detecting design? The basic intuition for distinguishing patterns that succeed in detecting design from those that fail is easily motivated. Consider the case of an archer. Suppose an archer stands fifty meters from a large wall with bow and arrow in hand. The wall, let’s say, is sufficiently large that the archer cannot help but hit it. Now suppose each time the archer shoots an arrow at the wall, the archer paints a target around the arrow so that the arrow sits squarely in the bull’s-eye. What can be concluded from this scenario? Absolutely nothing about the archer’s ability as an archer. Yes, a pattern is being matched; but it is a pattern fixed only after the arrow has been shot. The pattern is thus purely ad hoc.

But suppose instead the archer paints a fixed target on the wall and then shoots at it. Suppose the archer shoots a hundred arrows, and each time hits a perfect bull’s-eye. What can be concluded from this second scenario? Confronted with this second scenario we are obligated to infer that here is a world-class archer, one whose shots cannot legitimately be referred to luck, but rather must be referred to the archer’s skill and mastery. Skill and mastery are of course instances of design.

The type of pattern where the archer fixes a target first and then shoots at it is common to statistics, where it is known as setting a rejection region prior to an experiment. In statistics, if the outcome of an experiment falls within a rejection region, the chance hypothesis supposedly responsible for the outcome is rejected. Now a little reflection makes clear that a pattern need not be given prior to an event to eliminate chance and implicate design. Consider, for instance, a cryptographic text that encodes a message. Initially it looks like a random sequence of letters. Initially we lack any pattern for rejecting natural causes and inferring design. But as soon as someone gives us the cryptographic key for deciphering the text, we see the hidden message. The cryptographic key provides the pattern we need for detecting design. Moreover, unlike the patterns of statistics, it is given after the fact.
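
The cryptographic case is easy to simulate. In the toy example below (a simple Caesar shift, chosen purely for illustration), the ciphertext initially looks like a random string of letters; once the key is supplied, after the fact, a readable message appears, and with it the pattern that specifies the text.

```python
# A toy illustration of detecting a pattern only after a key is supplied.
# The cipher is a simple Caesar shift; message and key are invented examples.

import string

def caesar_decrypt(ciphertext: str, shift: int) -> str:
    """Shift each uppercase letter back by `shift`; leave other characters alone."""
    alphabet = string.ascii_uppercase
    table = str.maketrans(alphabet, alphabet[-shift:] + alphabet[:-shift])
    return ciphertext.translate(table)

ciphertext = "WKLV LV QRW QRLVH WKLV KDV VWUXFWXUH"

# Without the key, the string above is unspecifiable gibberish.  With the key
# (a shift of 3), the independently given pattern -- readable English -- appears.
print(caesar_decrypt(ciphertext, 3))  # THIS IS NOT NOISE THIS HAS STRUCTURE
```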

Patterns therefore divide into two types, those that in the presence of complexity warrant a design inference and those that despite the presence of complexity do not warrant a design inference. The first type of pattern I call a specification, the second a fabrication. Specifications are the non-ad hoc patterns that can legitimately be used to eliminate natural causes and detect design. In contrast, fabrications are the ad hoc patterns that cannot legitimately be used to detect design. The distinction between specifications and fabrications can be made with full statistical rigor.

Complexity and specification together yield a criterion for detecting design. I call it the complexity-specification criterion. According to this criterion, we reliably detect design in something whenever it is both complex and specified. To see why the complexity-specification criterion is exactly the right instrument for detecting design, we need to understand what it is about intelligent agents that makes them detectable in the first place. The principal characteristic of intelligent agency is choice. Whenever an intelligent agent acts, it chooses from a range of competing possibilities.

This is true not just of humans, but of animals as well as of extra-terrestrial intelligences. A rat navigating a maze must choose whether to go right or left at various points in the maze. When SETI researchers attempt to discover intelligence in the extra-terrestrial radio transmissions they are monitoring, they assume an extra-terrestrial intelligence could have chosen any number of possible radio transmissions, and then attempt to match the transmissions they observe with certain patterns as opposed to others. Whenever a human being utters meaningful speech, a choice is made from a range of possible sound-combinations that might have been uttered. Intelligent agency always entails discrimination, choosing certain things, ruling out others.

Given this characterization of intelligent agency, the crucial question is how to recognize it. Intelligent agents act by making a choice. How then do we recognize that an intelligent agent has made a choice? A bottle of ink spills accidentally onto a sheet of paper; someone takes a fountain pen and writes a message on a sheet of paper. In both instances ink is applied to paper. In both instances one among an almost infinite set of possibilities is realized. In both instances a contingency is actualized and others are ruled out. Yet in one instance we ascribe agency, in the other chance.

What is the relevant difference? Not only do we need to observe that a contingency was actualized, but we ourselves need also to be able to specify that contingency. The contingency must conform to an independently given pattern, and we must be able independently to formulate that pattern. A random ink blot is unspecifiable; a message written with ink on paper is specifiable. Ludwig Wittgenstein in Culture and Value made essentially the same point: “We tend to take the speech of a Chinese for inarticulate gurgling. Someone who understands Chinese will recognize language in what he hears. Similarly I often cannot discern the humanity in man.”

In hearing a Chinese utterance, someone who understands Chinese not only recognizes that one from a range of all possible utterances was actualized, but is also able to specify the utterance as coherent Chinese speech. Contrast this with someone who does not understand Chinese. In hearing a Chinese utterance, someone who does not understand Chinese also recognizes that one from a range of possible utterances was actualized, but this time, lacking the ability to understand Chinese, is unable to specify the utterance as coherent speech.

To someone who does not understand Chinese, the utterance will appear gibberish. Gibberish — the utterance of nonsense syllables uninterpretable within any natural language — always actualizes one utterance from the range of possible utterances. Nevertheless, gibberish, by corresponding to nothing we can understand in any language, also cannot be specified. As a result, gibberish is never taken for intelligent communication, but always for what Wittgenstein calls “inarticulate gurgling.”

This actualizing of one among several competing possibilities, ruling out the rest, and specifying the one that was actualized encapsulates how we recognize intelligent agency, or equivalently, how we detect design. Experimental psychologists who study animal learning and behavior have known this all along. To learn a task an animal must acquire the ability to actualize behaviors suitable for the task as well as the ability to rule out behaviors unsuitable for the task. Moreover, for a psychologist to recognize that an animal has learned a task, it is necessary not only to observe the animal making the appropriate discrimination, but also to specify this discrimination.

Thus to recognize whether a rat has successfully learned how to traverse a maze, a psychologist must first specify which sequence of right and left turns conducts the rat out of the maze. No doubt, a rat randomly wandering a maze also discriminates a sequence of right and left turns. But by randomly wandering the maze, the rat gives no indication that it can discriminate the appropriate sequence of right and left turns for exiting the maze. Consequently, the psychologist studying the rat will have no reason to think the rat has learned how to traverse the maze.

Only if the rat executes the sequence of right and left turns specified by the psychologist will the psychologist recognize that the rat has learned how to traverse the maze. Now it is precisely the learned behaviors we regard as intelligent in animals. Hence it is no surprise that the same scheme for recognizing animal learning recurs for recognizing intelligent agency generally, to wit: actualizing one among several competing possibilities, ruling out the others, and specifying the one chosen.

Note that complexity is implicit here as well. To see this, consider again a rat traversing a maze, but now take a very simple maze in which two right turns conduct the rat out of the maze. How will a psychologist studying the rat determine whether it has learned to exit the maze? Just putting the rat in the maze will not be enough. Because the maze is so simple, the rat could by chance just happen to take two right turns, and thereby exit the maze. The psychologist will therefore be uncertain whether the rat actually learned to exit this maze, or whether the rat just got lucky.

But contrast this now with a complicated maze in which a rat must take just the right sequence of left and right turns to exit the maze. Suppose the rat must take one hundred appropriate right and left turns, and that any mistake will prevent the rat from exiting the maze. A psychologist who sees the rat take no erroneous turns and in short order exit the maze will be convinced that the rat has indeed learned how to exit the maze, and that this was not dumb luck.

This general scheme for recognizing intelligent agency is but a thinly disguised form of the complexity-specification criterion. In general, to recognize intelligent agency we must observe a choice among competing possibilities, note which possibilities were not chosen, and then be able to specify the possibility that was chosen. What’s more, the competing possibilities that were ruled out must be live possibilities, and sufficiently numerous so that specifying the possibility that was chosen cannot be attributed to chance. In terms of complexity, this is just another way of saying that the range of possibilities is complex.

All the elements in this general scheme for recognizing intelligent agency (that is, choosing, ruling out, and specifying) find their counterpart in the complexity-specification criterion. It follows that this criterion formalizes what we have been doing right along when we recognize intelligent agency. The complexity-specification criterion pinpoints what we need to be looking for when we detect design.
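
Before turning to its implications, the criterion can be stated as a bare decision rule. The Python below is a deliberately simplified toy: the probability bound is an arbitrary placeholder standing in for the carefully argued bound of The Design Inference, and the specification test is reduced to a boolean flag.

```python
# A deliberately simplified toy version of the complexity-specification
# criterion: infer design only when an event is sufficiently improbable under
# the chance hypothesis (complex) AND matches an independently given pattern
# (specified).  The default bound is a placeholder, not an argued-for value.

def infers_design(prob_by_chance: float,
                  matches_independent_pattern: bool,
                  bound: float = 1e-150) -> bool:
    """Toy criterion: complex (improbable) and specified."""
    return prob_by_chance < bound and matches_independent_pattern

p = 2.0 ** -1000  # probability of any particular 1000-bit sequence

# An arbitrary run of 1000 coin flips: complex but unspecified.
print(infers_design(p, matches_independent_pattern=False))  # False

# A 1000-bit sequence matching an independently given pattern
# (say, the prime-number signal): complex and specified.
print(infers_design(p, matches_independent_pattern=True))   # True
```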

The implications of the complexity-specification criterion are profound, not just for science, but also for philosophy and theology. The power of this criterion resides in its generality. It would be one thing if the criterion only detected human agency. But as we’ve seen, it detects animal and extra-terrestrial agency as well. Nor is it limited to intelligent agents that belong to the physical world. The fine-tuning of the universe, about which cosmologists make such a to-do, is both complex and specified and readily yields design. So too, Michael Behe’s irreducibly complex biochemical systems readily yield design. The complexity-specification criterion demonstrates that design pervades cosmology and biology. Moreover, it is a transcendent design, not reducible to the physical world. Indeed, no intelligent agent who is strictly physical could have presided over the origin of the universe or the origin of life.

Unlike design arguments of the past, the claim that transcendent design pervades the universe is no longer a strictly philosophical or theological claim. It is also a fully scientific claim. There exists a reliable criterion for detecting design — the complexity-specification criterion. This criterion detects design strictly from observational features of the world. Moreover, it belongs to probability and complexity theory, not to metaphysics and theology. And although it cannot achieve logical demonstration, it is capable of achieving statistical justification so compelling as to demand assent. When applied to the fine-tuning of the universe and the complex, information-rich structures of biology, it demonstrates a design external to the universe. In other words, the complexity-specification criterion demonstrates transcendent design.

This is not an argument from ignorance. Just as physicists reject perpetual motion machines because of what they know about the inherent constraints on energy and matter, so too design theorists reject any naturalistic reduction of specified complexity because of what they know about the inherent constraints on natural causes. Natural causes are too stupid to keep pace with intelligent causes. We’ve suspected this all along. Intelligent design theory provides a rigorous scientific demonstration of this longstanding intuition. Let me stress, the complexity-specification criterion is not a principle that comes to us demanding our unexamined acceptance — it is not an article of faith. Rather, it is the outcome of a careful and sustained argument about the precise interrelationships between necessity, chance, and design (for the details, please refer to my monograph The Design Inference).

Demonstrating transcendent design in the universe is a scientific inference, not a philosophical speculation. Once we understand the role of the complexity-specification criterion in warranting this inference, several things follow immediately: (1) Intelligent agency is logically prior to natural causation and cannot be reduced to it. (2) Intelligent agency is fully capable of making itself known against the backdrop of natural causes. (3) Any science that systematically ignores design is incomplete and defective. (4) Methodological naturalism, the view that science must confine itself solely to natural causes, far from assisting scientific inquiry actually stifles it. (5) The scientific picture of the world championed since the Enlightenment is not just wrong but massively wrong. Indeed, entire fields of inquiry, especially in the human sciences, will need to be rethought from the ground up in terms of intelligent design.

The Creation of the World

I want now to take stock and consider where we are in our study of the act of creation. In the phrase “act of creation,” so far I have focused principally on the first part of that phrase — the “act” part, or what I’ve also been calling “intelligent agency.” I have devoted much of my talk till now to contrasting intelligent agency with natural causes. In particular, I have argued that no empirical evidence supports the reduction of intelligent agency to natural causes. I have also argued that no good philosophical arguments support that reduction. Indeed, the arguments offered in its support are circular, presupposing the very naturalism they are supposed to underwrite. My strongest argument against the sufficiency of natural causes to account for intelligent agency, however, comes from the complexity-specification criterion. This empirically-based criterion reliably discriminates intelligent agency from natural causes. Moreover, when applied to cosmology and biology, it demonstrates not only the incompleteness of natural causes, but also the presence of transcendent design.

Now, within Christian theology there is one and only one way to make sense of transcendent design, and that is as a divine act of creation. I want therefore next to focus on divine creation, and specifically on the creation of the world. My aim is to use divine creation as a lens for understanding intelligent agency generally. God’s act of creating the world is the prototype for all intelligent agency (creative or not). Indeed, all intelligent agency takes its cue from the creation of the world. How so? God’s act of creating the world makes possible all of God’s subsequent interactions with the world, as well as all subsequent actions by creatures within the world. God’s act of creating the world is thus the prime instance of intelligent agency.

Let us therefore turn to the creation of the world as treated in Scripture. The first thing that strikes us is the mode of creation. God speaks and things happen. There is something singularly appropriate about this mode of creation. Any act of creation is the concretization of an intention by an intelligent agent. Now in our experience, the concretization of an intention can occur in any number of ways. Sculptors concretize their intentions by chipping away at stone; musicians by writing notes on lined sheets of paper; engineers by drawing up blueprints; etc. But in the final analysis, all concretizations of intentions can be subsumed under language. For instance, a precise enough set of instructions in a natural language will tell the sculptor how to form the statue, the musician how to record the notes, and the engineer how to draw up the blueprints. In this way language becomes the universal medium for concretizing intentions.

In treating language as the universal medium for concretizing intentions, we must be careful not to construe language in a narrowly linguistic sense (for example, as symbol strings manipulated by rules of grammar). The language that proceeds from God’s mouth in the act of creation is not some linguistic convention. Rather, as John’s Gospel informs us, it is the divine Logos, the Word that in Christ was made flesh, and through whom all things were created. This divine Logos subsists in himself and is under no compulsion to create. For the divine Logos to be active in creation, God must speak the divine Logos. This act of speaking always imposes a self-limitation on the divine Logos. There is a clear analogy here with human language. Just as every English utterance rules out those statements in the English language that were not uttered, so every divine spoken word rules out those possibilities in the divine Logos that were not spoken. Moreover, just as no human speaker of English ever exhausts the English language, so God in creating through the divine spoken word never exhausts the divine Logos.

Because the divine spoken word always imposes a self-limitation on the divine Logos, the two notions need to be distinguished. We therefore distinguish Logos with a capital “L” (that is, the divine Logos) from logos with a small “l” (that is, the divine spoken word). Lacking a capitalization convention, the Greek New Testament employs logos in both senses. Thus in John’s Gospel we read that “the Logos was made flesh and dwelt among us.” Here the reference is to the divine Logos who incarnated himself in Jesus of Nazareth. On the other hand, in the First Epistle of Peter we read that we are born again “by the logos of God.” Here the reference is to the divine spoken word that calls to salvation God’s elect.

Because God is the God of truth, the divine spoken word always reflects the divine Logos. At the same time, because the divine spoken word always constitutes a self-limitation, it can never comprehend the divine Logos. Furthermore, because creation is a divine spoken word, it follows that creation can never comprehend the divine Logos either. This is why idolatry, worshipping the creation rather than the Creator, is so completely backwards, for it assigns ultimate value to something that is inherently incapable of achieving ultimate value. Creation, especially a fallen creation, can at best reflect God’s glory. Idolatry, on the other hand, contends that creation fully comprehends God’s glory. Idolatry turns the creation into the ultimate reality. We’ve seen this before. It’s called naturalism. No doubt, contemporary scientific naturalism is a lot more sophisticated than pagan fertility cults, but the difference is superficial. Naturalism is idolatry by another name.

We need at all costs to resist naturalistic construals of logos (whether logos with a capital “L” or a small “l”). Because naturalism has become so embedded in our thinking, we tend to think of words and language as purely contextual, local, and historically contingent. On the assumption of naturalism, humans are the product of a blind evolutionary process that initially was devoid not only of humans but also of any living thing whatsoever. It follows that human language must derive from an evolutionary process that initially was devoid of language. Within naturalism, just as life emerges from non-life, so language emerges from the absence of language.

Now it’s certainly true that human languages are changing, living entities — one has only to compare the King James version of the Bible with more recent translations into English to see how much our language has changed in the last 400 years. Words change their meanings over time. Grammar changes over time. Even logic and rhetoric change over time. What’s more, human language is conventional. What a word means depends on convention and can be changed by convention. For instance, there is nothing intrinsic to the word “automobile” demanding that it denote a car. If we go with its Latin etymology, we might just as well have applied “automobile” to human beings, who are after all “self-propelling.” There is nothing sacred about the linguistic form that a word assumes. For instance, “gift” in English means a present, in German it means poison, and in French it means nothing at all. And of course, words only make sense within the context of broader units of discourse like whole narratives.

For Christian theism, however, language is never purely conventional. To be sure, the assignment of meaning to a linguistic entity is conventional. Meaning itself, however, transcends convention. As soon as we stipulate our language conventions, words assume meanings and are no longer free to mean anything an interpreter chooses. The deconstructionist claim that “texts are indeterminate and inevitably yield multiple, irreducibly diverse interpretations” and that “there can be no criteria for preferring one reading to another” is therefore false. This is not to deny that texts can operate at multiple levels of meaning and interpretation. It is, however, to say that texts are anchored to their meaning and not free to float about indiscriminately.

Deconstruction’s error traces directly to naturalism. Within naturalism, there is no transcendent realm of meaning to which our linguistic entities are capable of attaching. As a result, there is nothing to keep our linguistic usage in check save pragmatic considerations, which are always contextual, local, and historically contingent. The watchword for pragmatism is expedience, not truth. Once expedience dictates meaning, linguistic entities are capable of meaning anything. Not all naturalists are happy with this conclusion. Philosophers like John Searle and D. M. Armstrong try simultaneously to maintain an objective realm of meaning and a commitment to naturalism. They want desperately to find something more than pragmatic considerations to keep our linguistic usage in check. Insofar as they pull it off, however, they are tacitly appealing to a transcendent realm of meaning (take, for instance, Armstrong’s appeal to universals). As Alvin Plantinga has convincingly argued, objective truth and meaning have no legitimate place within a pure naturalism. Deconstruction, for all its faults, has this in its favor: it is consistent in its application of naturalism to the study of language.

By contrast, logos resists all naturalistic reductions. This becomes evident as soon as we understand what logos meant to the ancient Greeks. For the Greeks logos was never simply a linguistic entity. Today when we think “word,” we often think a string of symbols written on a sheet of paper. This is not what the Greeks meant by logos. Logos was a far richer concept for the Greeks. Consider the following meanings of logos from Liddell and Scott’s Greek-English Lexicon:

  • the word by which the inward thought is expressed (speech)
  • the inward thought or reason itself (reason)
  • reflection, deliberation (choice)
  • calculation, reckoning (mathematics)
  • account, consideration, regard (inquiry, -ology)
  • relation, proportion, analogy (harmony, balance)
  • a reasonable ground, a condition (evidence, truth)

Logos is therefore an exceedingly rich notion encompassing the entire life of the mind.

The etymology of logos is revealing. Logos derives from the root l-e-g. This root appears in the Greek verb lego, which in the New Testament typically means “to speak.” Yet the primitive meaning of lego is to lay; from thence it came to mean to pick up and gather; then to select and put together; and hence to select and put together words, and therefore to speak. As Marvin Vincent remarks in his New Testament word studies: “logos is a collecting or collection both of things in the mind, and of words by which they are expressed. It therefore signifies both the outward form by which the inward thought is expressed, and the inward thought itself, the Latin oratio and ratio: compare the Italian ragionare, ‘to think’ and ‘to speak’.”

The root l-e-g has several variants. We’ve already seen it as l-o-g in logos. But it also occurs as l-e-c in intellect and l-i-g in intelligent. This should give us pause. The word intelligent actually comes from the Latin rather than from the Greek. It derives from two Latin words, the preposition inter, meaning between, and the Latin (not Greek) verb lego, meaning to choose or select. The Latin lego stayed closer to its Indo-European root meaning than its Greek cognate, which came to refer explicitly to speech. According to its etymology, intelligence therefore consists in choosing between.

We’ve seen this connection between intelligence and choice before, namely, in the complexity-specification criterion. Specified complexity is precisely how we recognize that an intelligent agent has made a choice. It follows that the etymology of the word intelligent parallels the formal analysis of intelligent agency inherent in the complexity-specification criterion. The appropriateness of the phrase intelligent design now becomes apparent as well. Intelligent design is a scientific research program that seeks to understand intelligent agency by investigating specified complexity. But specified complexity is the characteristic trademark of choice. It follows that intelligent design is a thoroughly apt phrase, signifying that design is inferred precisely because an intelligent agent has done what only an intelligent agent can do, namely, make a choice.
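To make the two-part logic of the criterion concrete, here is a minimal toy sketch in Python. The bit-string events, the fair-coin chance hypothesis, the prespecified pattern set, and the probability bound are illustrative assumptions only; they stand in for the formal apparatus rather than reproduce it.

    # A toy sketch of the complexity-specification criterion. The bit-string
    # events, the fair-coin chance hypothesis, the prespecified pattern set,
    # and the probability bound are illustrative assumptions, not the formal
    # apparatus of the criterion itself.

    def probability_under_chance(event):
        """Probability of a bit string under independent fair coin flips."""
        return 0.5 ** len(event)

    def is_specified(event, patterns):
        """Specification: the event matches a pattern given independently of it."""
        return event in patterns

    def exhibits_specified_complexity(event, patterns, bound=1e-9):
        """Infer design only when the event is both specified and improbable."""
        return is_specified(event, patterns) and probability_under_chance(event) < bound

    # A 40-bit string matching a prespecified target is flagged;
    # an equally improbable string matching no prespecified pattern is not.
    target = "1011001110001111000011111000001111110000"
    stray  = "0110100110010110100101100110100110010110"
    print(exhibits_specified_complexity(target, {target}))  # True
    print(exhibits_specified_complexity(stray, {target}))   # False

The point of the sketch is simply that the inference turns on a choice: only the string that matches an independently given pattern, and is too improbable to credit to chance, gets attributed to an intelligent agent.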

If intelligent design is a thoroughly apt phrase, the same cannot be said for the phrase natural selection. The second word in this phrase, selection, is of course a synonym for choice. Indeed, the l-e-c in selection is a variant of the l-e-g that in the Latin lego means to choose or select, and that also appears as l-i-g in intelligence. Natural selection is therefore an oxymoron. It attributes the power to choose, which properly belongs only to intelligent agents, to natural causes, which inherently lack the power to choose. Richard Dawkins’s concept of the blind watchmaker follows the same pattern, negating with blind what is affirmed in watchmaker. That’s why Dawkins opens his book The Blind Watchmaker with the statement: “Biology is the study of complicated things that give the appearance of having been designed for a purpose.” Natural selection and blind watchmakers don’t yield actual design, but only the appearance of design.

Having considered the role of logos in creating the world, I want next to consider its role in rendering the world intelligible. To say that God through the divine Logos acts as an intelligent agent to create the world is only half the story. Yes, there is a deep and fundamental connection between God as divine Logos and God as intelligent agent; indeed, the very words logos and intelligence derive from the same Indo-European root. The world, however, is more than simply the product of an intelligent agent. In addition, the world is intelligible.

We see this in the very first entity that God creates: light. With the creation of light, the world becomes a place that can be conceptualized, and to which values can properly be assigned. To be sure, as God increasingly orders the world through the process of creation, the number of things that can be conceptualized increases, and the values assigned to things become refined. But even while light is the only created entity, it is possible to conceptualize it, distinguish it from darkness, and assign it a positive value, calling it good. The world is thus not merely a place where God’s intentions are fulfilled, but also a place where God’s intentions are intelligible. Moreover, that intelligibility is as much moral and aesthetic as it is scientific.

God, in speaking the divine Logos, not only creates the world but also renders it intelligible. This view of creation has far-reaching consequences. For instance, the fact-value distinction dissolves in the face of God’s act of creation; indeed, what is and what ought to be unite in God’s original intention at creation. Consider too Einstein’s celebrated dictum about the comprehensibility of the world. Einstein claimed: “The most incomprehensible thing about the world is that it is comprehensible.” This statement, so widely regarded as a profound insight, is actually a sad commentary on naturalism. Within naturalism the intelligibility of the world must always remain a mystery. Within theism, on the other hand, anything other than an intelligible world would constitute a mystery.

God speaks the divine Logos to create the world, and thereby renders the world intelligible. This fact is absolutely crucial to how we understand human language, and especially human language about God. Human language is a divine gift for helping us to understand the world, and by understanding the world to understand God himself. This is not to say that we ever comprehend God, as in achieving fixed, final, and exhaustive knowledge of God. But human language does enable us to express accurate claims about God and the world. It is vitally important for the Christian to understand this point. Human language is not an evolutionary refinement of grunts and stammers formerly uttered by some putative apelike ancestors. We are creatures made in the divine image. Human language is therefore a divine gift that mirrors the divine Logos.

Consider what this conception of language does to the charge that biblical language is hopelessly anthropomorphic. We continue to have conferences in the United States with titles like “Reimagining God.” The idea behind such titles is that all our references to God are human constructions and can be changed as human needs require new constructions. Certain feminist theologians, for instance, object to referring to God as father. God as father, we are told, is an outdated patriarchal way of depicting God that, given contemporary concerns, needs to be changed. “Father,” we are told, is a metaphor co-opted from human experience and pressed into theological service. No. No. No. This view of theological language is hopeless and destroys the Christian faith.

The concept father is not an anthropomorphism, nor is referring to God as father metaphorical. All instances of fatherhood reflect the fatherhood of God. It’s not that we are taking human fatherhood and idealizing it into a divine father image à la Ludwig Feuerbach or Sigmund Freud. It’s not that we are committing an anthropomorphism by referring to God as father. Rather, we are committing a “theomorphism” by referring to human beings as fathers. We are never using the word “father” as accurately as when we attribute it to God. As soon as we apply “father” to human beings, our language becomes analogical and derivative.

We see this readily in Scripture. Jesus enjoins us to call no one father except God. Certainly Jesus is not telling us never to refer to any human being as “father.” All of us have human fathers, and they deserve that designation. Indeed, the Fifth Commandment tells us explicitly to honor our human fathers. But human fathers reflect a more profound reality, namely, the fatherhood of God. Or consider how Jesus responds to the rich young ruler who addresses him as “good master.” Jesus shoots back, “Why do you call me good? There is no one good except God.” Goodness properly applies to God. It’s not an anthropomorphism to call God good. The goodness we attribute to God is not an idealized human goodness. God defines goodness. When we speak of human goodness, it is only as subordinate to the divine goodness.

This view, that human language is a divine gift for understanding the world and therewith God, is powerfully liberating. No longer do we live in a Platonic world of shadows from which we must escape if we are to perceive the divine light. No longer do we live in a Kantian world of phenomena that bars access to noumena. No longer do we live in a naturalistic world devoid of transcendence. Rather, the world and everything in it becomes a sacrament, radiating God’s glory. Moreover, our language is capable of celebrating that glory by speaking truly about what God has wrought in creation.

The view that creation proceeds through a divine spoken word has profound implications not just for the study of human language, but also for the study of human knowledge, or what philosophers call epistemology. For naturalism, epistemology’s primary problem is unraveling Einstein’s dictum: “The most incomprehensible thing about the world is that it is comprehensible.” How is it that we can have any knowledge at all? Within naturalism there is no solution to this riddle. Theism, on the other hand, faces an entirely different problematic. For theism the problem is not how we can have knowledge, but why our knowledge is so prone to error and distortion. The Judeo-Christian tradition attributes the problem of error to the fall. At the heart of the fall is alienation. Beings are no longer properly in communion with other beings. We lie to ourselves. We lie to others. And others lie to us. Appearance and reality are out of sync. The problem of epistemology within the Judeo-Christian tradition isn’t to establish that we have knowledge, but instead to root out the distortions that try to overthrow our knowledge.

On the view that creation proceeds through a divine spoken word, not only does naturalistic epistemology have to go by the board, but so does naturalistic ontology. Ontology asks what the fundamental constituents of reality are. According to naturalism (and I’m thinking here specifically of the scientific naturalism that currently dominates Western thought), the world is fundamentally an interacting system of mindless entities (be they particles, strings, fields, or whatever). Mind therefore becomes an emergent property of suitably arranged mindless entities. Naturalistic ontology is all backwards. If creation and everything in it proceeds through a divine spoken word, then the entities that are created don’t suddenly fall silent at the moment of creation. Rather they continue to speak.

I look at a blade of grass and it speaks to me. In the light of the sun, it tells me that it is green. If I touch it, it tells me that it has a certain texture. It communicates something else to a chinch bug intent on devouring it. It communicates something else still to a particle physicist intent on reducing it to its particulate constituents. Which is not to say that the blade of grass does not communicate things about the particles that constitute it. But the blade of grass is more than any arrangement of particles and is capable of communicating more than is inherent in any such arrangement. Indeed, its reality derives not from its particulate constituents, but from its capacity to communicate with other entities in creation and ultimately with God himself.

The problem of being now receives a straightforward solution: To be is to be in communion, first with God and then with the rest of creation. It follows that the fundamental science, indeed the science that needs to ground all other sciences, is communication theory, and not, as is widely supposed, an atomistic, reductionist, and mechanistic science of particles or other mindless entities, which then need to be built up to ever greater orders of complexity by equally mindless principles of association, typically known as natural laws. Communication theory’s object of study is not particles, but the information that passes between entities. Information in turn is just another name for logos. This is an information-rich universe. The problem with mechanistic science is that it has no resources for recognizing and understanding information. Communication theory is only now coming into its own. A crucial development along the way has been the complexity-specification criterion. Indeed, specified complexity is precisely what’s needed to recognize information.
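To give a sense of how communication theory quantifies what passes between entities, here is a minimal sketch in Python. It measures an event’s information in bits as the negative base-2 logarithm of its probability; the probabilities themselves are made-up illustrations, not data from any argument above.

    import math

    # A minimal sketch of how communication theory quantifies information:
    # an event's information content is the negative base-2 logarithm of its
    # probability. The probabilities below are made-up illustrations.

    def information_in_bits(probability):
        """Shannon information: the more improbable the event, the more bits."""
        return -math.log2(probability)

    print(information_in_bits(0.5))   # a fair coin toss carries 1 bit
    print(information_in_bits(1e-6))  # a one-in-a-million event, about 19.9 bits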

Information — the information that God speaks to create the world, the information that continually proceeds from God in sustaining the world and acting in it, and the information that passes between God’s creatures — this is the bridge that connects transcendence and immanence. All of this information is mediated through the divine Logos, who is before all things and by whom all things consist (Colossians 1:17). The crucial breakthrough of the intelligent design movement has been to show that this great theological truth — that God acts in the world by dispersing information — also has scientific content. All information, whether divinely inputted or transmitted between creatures, is in principle capable of being detected via the complexity-specification criterion. Examples abound:

  • The fine-tuning of the universe and irreducibly complex biochemical systems are instances of specified complexity, and signal information inputted into the universe by God at its creation.
  • Predictive prophecies in Scripture are instances of specified complexity, and signal information inputted by God as part of his sovereign activity within creation.
  • Language communication between humans is an instance of specified complexity, and signals information transmitted from one human to another.

The positivist science of this and the last century was incapable of coming to terms with information. The science of the new millennium will not be able to avoid it. Indeed, we already live in an information age.

Creativity, Divine and Human

In closing this talk, I want to ask an obvious question: Why create? Why does God create? Why do we create? Although creation is always an intelligent act, it is much more than an intelligent act. The impulse behind creation is always to offer oneself as a gift. Creation is a gift. What’s more, it is a gift of the most important thing we possess: ourselves. Indeed, creation is the means by which a creator (divine, human, or otherwise) gives himself in self-revelation. Creation is not the neurotic, forced self-revelation offered on the psychoanalyst’s couch. Nor is it the facile self-revelation of idle chatter. It is the self-revelation of labor and sacrifice. Creation always incurs a cost. Creation invests the creator’s life in the thing created. When God creates humans, he breathes into them the breath of life, God’s own life. At the end of the six days of creation God is tired; he has to rest. Creation is exhausting work. It is drawing oneself out of oneself and then imprinting oneself on the other.

Consider, for instance, the painter Vincent van Gogh. You can read all the biographies you want about him, but through it all van Gogh will still not have revealed himself to you. For van Gogh to reveal himself to you, you need to look at his paintings. As the Greek Orthodox theologian Christos Yannaras writes: “We know the person of van Gogh, what is unique, distinct and unrepeatable in his existence, only when we see his paintings. There we meet a reason (logos) which is his only and we separate him from every other painter. When we have seen enough pictures by van Gogh and then encounter one more, then we say right away: This is van Gogh. We distinguish immediately the otherness of his personal reason, the uniqueness of his creative expression.”

The difference between the arts and the sciences now becomes clear. When I see a painting by van Gogh, I know immediately that it is his. But when I come across a mathematical theorem or scientific insight, I cannot decide who was responsible for it unless I am told. The world is God’s creation, and scientists in understanding the world are simply retracing God’s thoughts. Scientists are not creators but discoverers. True, they may formulate concepts that assist them in describing the world. But even such concepts do not bear the clear imprint of their formulators. Concepts like energy, inertia, and entropy give no clue about who formulated them. Hermann Weyl and John von Neumann were both equally qualified to formulate quantum mechanics in terms of Hilbert spaces. That von Neumann, and not Weyl, made the formulation is now an accident of history. There’s nothing in the formulation that explicitly identifies von Neumann. Contrast this with a painting by van Gogh. It cannot be confused with a Monet.

The impulse to create and thereby give oneself in self-revelation need not be grand, but can be quite humble. A homemaker arranging a floral decoration engages in a creative act. The important thing about the act of creation is that it reveal the creator. The act of creation always bears the signature of the creator. It is a sad legacy of modern technology, and especially the production line, that most of the objects we buy no longer reveal their maker. Mass production is inimical to true creation. Yes, the objects we buy carry brand names, but in fact they are largely anonymous. We can tell very little about their maker. Compare this with God’s creation of the world. Not one tree is identical with another. Not one face matches another. Indeed, a single hair on your head is unique — there was never one exactly like it, nor will there ever be another to match it.

The creation of the world by God is the most magnificent of all acts of creation. It and humanity’s redemption through Jesus Christ are the two key instances of God’s self-revelation. The revelation of God in creation is typically called general revelation, whereas the revelation of God in redemption is typically called special revelation. Consequently, theologians sometimes speak of two books: the Book of Nature, which is God’s self-revelation in creation, and the Book of Scripture, which is God’s self-revelation in redemption. If you want to know who God is, you need to know God through both creation and redemption. According to Scripture, the angels praise God chiefly for two things: God’s creation of the world and God’s redemption of the world through Jesus Christ. Let us follow the angels’ example.

William A. Dembski

Founding and Senior Fellow, Center for Science and Culture; Distinguished Fellow, Walter Bradley Center for Natural and Artificial Intelligence
A mathematician and philosopher, Bill Dembski is the author/editor of more than 25 books as well as the writer of peer-reviewed articles spanning mathematics, engineering, biology, philosophy, and theology. With doctorates in mathematics (University of Chicago) and philosophy (University of Illinois at Chicago), Bill is an active researcher in the field of intelligent design. But he is also a tech entrepreneur who builds educational software and websites, exploring how education can help to advance human freedom with the aid of technology.