
The End of Materialist Science

For more information about David Berlinski – his new books, video clips from interviews, and upcoming events – please visit his website at www.davidberlinski.org.


“Simply the thing you are shall make you live.” – Old Spanish Proverb

FOR THE MOMENT, WE ARE ALL waiting for the gate of time to open. The heroic era of scientific exploration appears at an end, the large aching questions settled. An official ideology is everywhere in evidence and everywhere resisted. From the place where space and time are curved to the arena of the elementary particles, there is nothing but matter in one of its modes. Physicists are now pursuing the grand unified theory that will in one limpid gesture amalgamate the world’s four far-flung forces.

For all of its ambitiousness, it is hardly an inspiring view. And few have been inspired by it. “The more the universe seems comprehensible,” Steven Weinberg has written sourly, “the more it seems pointless.” Yet even as the system is said to be finished, with only the details to be put in place, a delicate system of subversion is at work, the very technology made possible by the sciences themselves undermining the foundations of the edifice, compromising its principles, altering its shape and the way it feels, conveying the immemorial message that the land is more fragrant than it seemed at sea.

Entombed in one century certain questions sometimes arise at the threshold of another, their vitality strangely intact, rather like one of the Haitian undead, hair floating and silvered eyes flashing. Complexity, the Reverend William Paley observed in the eighteenth century, is a property of things, one as notable as their shape or mass. But complexity, he went on to observe, is also a property that requires an explanation.

It was a shrewd, a pregnant, a strangely modern observation. The simple structures formed by the action of the waves along a beach–the shapely dunes, sea caves, the sparkling pattern of the perishable foam itself–may be explained by a lighthearted invocation of air, water, and wind. But the things that interest us and compel our fascination are different. The laws of matter and the laws of chance, these, Paley seemed to suggest, control the behavior of ordinary material objects, but nothing in nature suggests the appearance of a complicated artifact. Unlike things that are simple, complex objects are logically isolated, and so they are logically unexpected.

Paley offered examples that he had on hand, a pocket watch chiefly, but that watch, its golden bezel still glowing after all these years, Paley pulled across his ample paunch as an act of calculated misdirection. The target of his cunning argument lay elsewhere, with the world of biological artifacts: the orchid’s secret chamber, the biochemical cascade that stops the blood from flooding the body at a cut. These, too, are complex, infinitely more so than a watch. Today, with these extraordinary objects now open for dissection by the biological sciences, precisely the same inferential pattern that sweeps back from a complex human artifact to the circumstance of its design sweeps back from a complex biological artifact to the circumstance of its design.

What, then, is the origin of their complexity? This is Paley’s question.

It is ours as well. At century’s end, the great clock ticking resolutely, complexity seems more complex than ever. Along with chaos, it is the convenient explanation for every conceivable catastrophe; given the prevalence of catastrophes, it is much in vogue. Civil and computer codes are complex; so are the air transport system, diseases of the liver, and the law of torts. Ditto for modern automobile engines, even the mechanic at my shop complaining richly that “this here gizmo’s too complex for me, Mister B’linski.” Semiconductors are complex and so are the properties of silicon, Bose-Einstein condensates, and Japanese kanji. The life of the butterfly is marvelously complex, the thing born a caterpillar and fated thus to crawl upon the ground, immolated later in its own liquid, and then majestically reemerging as a radiant, multicolored insect, bright master of the air.

Everything in nature, the French mathematician René Thom observed with a certain irony, is complex in one way or another. And not the least of the problems with this forthright if overstated observation (some things, after all, must be simple if anything is to be complex) is the fact that in most cases we have no definition of complexity to which we can appeal, no sense, beyond the superficial, of what complexity means.

Garden of the Branching Forks

IN DEVELOPING HIS ARGUMENT, Paley drew–he intended to draw–the curtain of a connection between complexity and design, and so between complexity and intelligence. Whether inscribed on paper or recorded in computer code, a design is, after all, the physical overflow of intelligence itself, its trace in matter. A large, a general biological property, intelligence is exhibited in varying degrees by everything that lives. It is intelligence that immerses living creatures in time, allowing the cat and the cockroach alike to peep into the future and remember the past. The lowly paramecium is intelligent, learning gradually to respond to electrical shocks, this quite without a brain, let alone a nervous system. But like so many other psychological properties, intelligence remains elusive without an objective correlative, some public set of circumstances to which one can point with the intention of saying there, that is what intelligence is, or what intelligence is like.

The stony soil between mental and mathematical concepts is not usually thought efflorescent, but in the idea of an algorithm modern mathematics does offer an obliging witness to the very idea of intelligence. They arise, these objects, from an old, a wrinkled class of human artifacts, things so familiar in collective memory as to pass unnoticed. A simple recipe for bœuf à la mode is an algorithm; it serves to control the flow of time, breaking human action into small and manageable steps (braising, browning, skimming). Those maddeningly imprecise computer manuals, written half in Korean, it would seem, and half in English, they, too, are algorithms, although ones that are often badly organized.

The promotion of an algorithm from its humdrum historical antecedents to its modern incarnation was undertaken in the 1930s by a quartet of gifted mathematicians: Emil Post, a Polish American logician, doomed like Ovid to spend his years in exile, his most productive work undertaken at City College in New York; the great Kurt Gödel; Alonzo Church, the American Gothic; and, of course, the odd, utterly original Alan Turing, his spirit yet restless and uneasy more than forty years after his unhappy death (an enforced course of hormone therapy to reverse his homosexuality, an apple laced with cyanide).

The problem they faced was to give precise meaning to the concept of an effective procedure within mathematics. The essential idea of an algorithm blazes forth from any digital computer, the unfolding of genius having passed inexorably from Gödel’s incompleteness theorem to Space Invaders rattling on an arcade Atari, a progression suggesting something both melancholy and exuberant about our culture.

The computer is a machine, and so belongs to the class of things in nature that do something, but the computer is also a device dividing itself into aspects: symbols set into software to the left, and the hardware needed to read, store and manipulate the software to the right. This division of labor is unique among man-made artifacts: It suggests the mind immersed within the brain, the soul within the body, the presence anywhere of spirit in matter. An algorithm is thus an ambidextrous artifact, residing at the heart of both artificial and human intelligence.

Computer science and the computational theory of mind appeal to precisely the same garden of branching forks to explain what computers do or what men can do or what in the tide of time they have done.

It is this circumstance that in the 1940s prompted Turing to ask whether computers could really think, a question that we have for more than forty years evidently found oddly compelling. We ask it time and again. Turing was prepared to settle his own question in favor of the computer if the machines could satisfactorily conceal their identity, insouciantly passing themselves off, when suitably disguised, as human beings. Every year the issue is somewhere joined, a collection of computer scientists facing a number of curtained wooden booths, trying to determine from the cryptic messages they are receiving–My name is Bertha and I am hungry for love–whether Bertha is a cleverly programmed machine or whether, warm and wet, Bertha is herself resident behind the curtain, stamping her large feet and hoping for a message in turn.

It is a deliciously comic spectacle. So, too, the contests between human and computer chess champions, as when, with his simian brow furrowed nobly, Garry Kasparov defended the honor of the human race against Deep Blue, a dedicated chess-playing computer with an engaging willingness to topple into tactical traps. Artificial intelligence? The idea provokes anxiety, of course, especially among those who shuffle symbols for a living (me, come to think of it), but there it is: If an ability to execute ordinary arithmetic operations is a form of thought, then intelligence has already been localized in a device that fits within the palm of one’s hand.

Everything else is a matter of detail. Or denial.

The Information

AN ALGORITHM IS A scheme for the manipulation of symbols–a garden of branching forks, but to say this is only to say what an algorithm does. Symbols do more than suffer themselves to be hustled around; they are there to offer their reflections of the world. They are instruments that convey information.

The most general of fungible commodities, information has become something shipped, organized, displayed, routed, stored, held, manipulated, disbursed, bought, sold, and exchanged. It is, I think, the first entirely abstract object to have become an item of trade, rather as if one of the Platonic forms were to become the subject of a public offering. The superbly reptilian Richard Dawkins has written of life as a river of information, one proceeding out of Eden, almost as if a digital flood had evacuated itself at its source. Somewhere in the wheat-stabbed fields of the American Midwest, a physicist has argued for a vision of reincarnation in which human beings may look forward to a resumption of their activities after death on the basis of simulation within a gigantic computer. Sexual pleasures are said to be unusually keen in the digital hereafter.

The American mathematician Claude Shannon gave the concept of information its modern form, endowing an old, a familiar idea with an unyielding but perspicuous mathematical structure. The date is 1948. His definition is one of the cornerstones in the arch of modern thought. Information, Shannon realized, is a property resident in symbols, his theory thus confined from the start by the artifice of words. What Shannon required was a way of incorporating information into the noble category of continuous properties that, like mass or distance, can be represented by real numbers. Philosophers have from time immemorial talked of symbols and signs, their theories turning in an endless circle. Shannon regarded their domain with a mathematician’s cold and cunning eye. He saw that symbols function in a human universe where things in themselves are glimpsed through a hot haze of confusion and doubt. Whatever else a message may do, its most important function is to relieve uncertainty, “Rejoice, we conquer” making clear that one side has lost, the other won, the contest finally a matter of fact. It is by means of this superb insight that Shannon was able to coordinate in a limpid circle what symbols do, the accordion of human emotions wheezing in at doubt and expanding outward at certainty, and the great, the classical concepts of the theory of probability. A simple binary symbol, its existence suspended between two equally likely states (on or off, zero or one, yes or no), holds latent the promise of resolving an initial state of uncertainty by half. The symbol’s information, measured in bits, is thus one.
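Shannon’s measure can be put to work in a few lines. The sketch below (the function name is mine, for illustration) computes the information, in bits, carried by observing an event of probability p.

```python
import math

def information_bits(p: float) -> float:
    """Bits of information conveyed by observing an event of probability p."""
    return -math.log2(p)

# A fair binary symbol has two equally likely states, each of probability
# one half; observing it resolves the uncertainty and conveys exactly one bit.
print(information_bits(1 / 2))  # 1.0

# Rarer outcomes carry more information: a one-in-eight event yields three bits.
print(information_bits(1 / 8))  # 3.0
```

Averaging this quantity over all of a source’s symbols, weighted by their probabilities, yields Shannon’s entropy, the expected information per symbol.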

Simple, no? And so very elegant.

The promotion of information from an informal concept to the mathematical Big Time has provided another benefit, and that is to bring complexity itself into the larger community of properties that are fundamental because they are measurable. The essential idea is due, ecumenically, to the great Russian mathematician Andrey Kolmogorov and to Gregory Chaitin, an American student at City College in New York at the time of his discovery (the spirit of Emil Post no doubt acting ectoplasmically). The cynosure of their concerns lay with strings of symbols. Lines of computer code, and so inevitably algorithms, are the obvious examples, but descending gravely down a wide flight of stairs, their black hair swept up and clasped together by diamond brooches, Anna Karenina and Madame Bovary in the end reduce themselves to strings of symbols, the ravishing women vanishing into the words, and hence the symbols, that describe them. Adieu, Mesdames.

In such settings, Kolmogorov and Chaitin argued, complexity is allied to compressibility and then to length, a simple counting concept. Certain strings of symbols may be expressed, and expressed completely, by strings that are shorter than they are; they have some give. A string of ten H’s (H,H,H,H,H,H,H,H,H,H) is an example. It may be replaced by the command, given to a computer, say, to print the letter H ten times. Strings that have little or no give are what they are. No scheme for their compression is available. The obvious example is a random string–H,T,T,H,H,T,H,H,H,T, say, which I have generated by flipping a coin ten times. Kolmogorov and Chaitin identified the complexity of a string with the length of the shortest computer command capable of generating that string. This returns the discussion to the idea of information. It functions in this discussion (and everywhere else) as a massive gravitational object, exerting enormous influence on every other object in its conceptual field.
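Kolmogorov–Chaitin complexity is uncomputable in general, but an ordinary compressor offers a crude, computable stand-in: the patterned string shrinks drastically, the record of coin flips barely at all. A minimal sketch, using Python’s standard zlib library:

```python
import random
import zlib

# Ten thousand H's: pure pattern, so the string has plenty of "give".
patterned = "H" * 10_000

# Ten thousand simulated coin flips: no pattern for the compressor to exploit.
random.seed(0)  # fixed seed so the sketch is reproducible
coin_flips = "".join(random.choice("HT") for _ in range(10_000))

print(len(zlib.compress(patterned.encode())))   # a few dozen bytes
print(len(zlib.compress(coin_flips.encode())))  # well over a thousand bytes
```

The compressed length is only an upper bound on the Kolmogorov–Chaitin complexity, but the contrast between the two strings is the point at issue.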

The definition of algorithmic complexity gives the appearance of turning the discussion in a circle, the complexity of things explained by an appeal to intelligence and intelligence explained, or at least exhibited, by an appeal to a community of concepts–algorithms, symbols, information–upon which a definition of complexity has been impressed. In fact, I am not so much moving aimlessly in a circle as descending slowly in a spiral, explaining the complexity of things by an appeal to the complexity of strings. Explanation is, perhaps, the wrong, the dramatically inflated word. Nothing has been explained by what has just been concluded. The alliance between complexity and intelligence that Paley saw through a dark glass remains in place, but the descending spiral has done only what descending spirals can do, and that is to convey a question downward.

A Station of the Cross

MOLECULAR BIOLOGY HAS revealed that whatever else a living creature may be–God’s creation, the locus in the universe of pity and terror, so much blubbery protoplasm–a living creature is also a combinatorial system, its organization controlled by a strange, a hidden and obscure text, one written in a biochemical code. It is an algorithm that lies at the humming heart of life, ferrying information from one set of symbols (the nucleic acids) to another (the proteins). An algorithm? How else to describe the intricacy of transcription, translation, and replication than by an appeal to an algorithm? For that matter, what else to call the quantity stored in the macromolecules than information? And if the macromolecules store information, they function in some sense as symbols.

We are traveling in all the old familiar circles. Nonetheless, molecular biology provides the first clear, the first resonant, answer to Paley’s question. The complexity of human artifacts, the things that human beings make, finds its explanation in human intelligence. The intelligence responsible for the construction of complex artifacts–watches, computers, military campaigns, federal budgets, this very essay–finds its explanation in biology. This may seem suspiciously as if one were explaining the content of one conversation by appealing to the content of another, and so, perhaps, it is, but at the very least, molecular biology represents a place lower on the spiral than the place from which we first started, the downward descent offering the impression of progress if only because it offers the impression of movement.

However invigorating it is to see the threefold pattern of algorithm, information, and symbol appear and reappear, especially on the molecular biological level, it is important to remember that we have little idea how the pattern is amplified. The explanation of complexity that biology affords is yet largely ceremonial. A living creature is, after all, a concrete, complex, and autonomous three-dimensional object, something or someone able to carry on its own affairs; it belongs to a world of its own–our world, as it happens, the world in which animals hunt and hustle, scratch themselves in the shedding sun, yawn, move about at will. The triple artifacts of algorithm, information, and symbol are abstract and one-dimensional, entirely static; they belong to the very different universe of symbolic forms. At the very heart of molecular biology, a great mystery is in evidence, as those symbolic forms bring an organism into existence, control its morphology and development, and slip a copy of themselves into the future.

These transactions hide a process never seen among purely physical objects, although one that is characteristic of the world in which orders are given and obeyed, questions asked and answered, promises made and kept. In that world, where computers hum and human beings attend to one another, intelligence is always relative to intelligence itself, systems of symbols gaining their point from having their point gained.

This is not a paradox. It is simply the way things are. Some two hundred years ago, the Swiss naturalist Charles Bonnet–a contemporary of Paley’s–asked for an account of the “mechanics which will preside over the formation of a brain, a heart, a lung, and so many other organs.” No account in terms of mechanics is yet available. Information passes from the genome to the organism. Something is given and something read; something is ordered and something done. But just who is doing the reading and who is executing the orders, this remains unclear.

The Cool Clean Place

THE TRIPLE CONCEPTS of algorithm, information, and symbol lie at the humming heart of life. How they promote themselves into an organism is something of a mystery, a part of the general mystery by which intelligence achieves its effects. But just how in the scheme of things did these superb symbolic instruments come into existence? Why should there be very complex informational macromolecules at all? We are looking further downward now, toward the laws of physics.

Darwin’s theory of evolution is widely thought to provide a purely materialistic explanation for the emergence and development of life. But even if this extravagant and silly claim is accepted at face value, no one suggests that theories of evolution are in any sense a fundamental answer to Paley’s question. One can very easily imagine a universe rather like the surface of Jupiter, a mass of flaming gases too hot or too insubstantial for the emergence or the flourishing of life. There is in the universe we inhabit a very cozy relationship between the fundamental structure of things and our own boisterous emergence on the scene, something remarked by every physicist. The theory of evolution is yet another station, a place to which complexity has been transferred and from which it must be transferred anew.

The fundamental laws of physics describe the clean, cool place from which the world’s complexity ultimately arises. What else besides those laws remains? They hold the promise of radical simplicity in a double sense.

Within the past quarter century, physicists have come to realize that theories of change may always be expressed in terms of the conservation of certain quantities. Where there is conservation, there is symmetry as well. The behavior of an equilateral triangle in space preserves three rotational symmetries, the vertices of the triangle simply changing positions until the topmost vertex is back where it started. And it preserves three reflectional symmetries as well, as when the triangle is flipped over its altitude. A theory of how the triangle changes its position in space is at the same time–it is one and the same thing–a theory of the quantities conserved by the triangle as it is being rotated or reflected. The appropriate object for the description of the triangle is a structure that exhibits the requisite symmetry. (Such are the groups.) The fundamental laws of physics–the province of gauge theories–achieve their effect by appealing to symmetries. The physicist looks upon a domain made simple because it is made symmetrical. This sense of simplicity is a sense of simplicity in things; whether captured fully by the laws of nature or not, symmetry is an objective property of the real world, out there, in some sense of “out” and some sense of “there.”
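The six symmetries of the equilateral triangle can be enumerated directly: a symmetry is a relabeling of the vertices that leaves every pairwise distance unchanged. A small sketch (the coordinates and the tolerance are mine):

```python
from itertools import permutations

# Vertices of an equilateral triangle with unit sides.
verts = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Keep each permutation of the vertex labels that preserves every
# pairwise distance; for the equilateral triangle all six survive.
symmetries = [
    p for p in permutations(range(3))
    if all(
        abs(dist(verts[p[i]], verts[p[j]]) - dist(verts[i], verts[j])) < 1e-9
        for i in range(3)
        for j in range(3)
    )
]
print(len(symmetries))  # 6: three rotations (counting the identity) and three reflections
```

Together the six form the dihedral group of the triangle, an instance of the group structures the text alludes to.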

And yet, the world in which we moan and mate is corrupt, fragmented, hopelessly asymmetrical, with even the bilateral symmetry of the human body compromised by two kidneys but one comically misplaced heart, two lungs, one oblate liver. This, too, the fundamental laws of physics explain, those laws miraculously accounting for both purity and corruption in one superb intellectual gesture. The basic expressions of mathematical physics are mathematical equations. Like the familiar equations of high school algebra, they point to an unknown by means of an adroit compilation of connected clues–specification in the dark.

But while the equations respect nature’s perfect symmetries, their various solutions do not. How can this be? Simple. The equation that some number when squared is four has two matched and thus symmetrical solutions (two and minus two). The equation preserves a certain symmetry, but once a particular solution is chosen, some unavoidable bias is introduced, whether positive or negative. So too with the fundamental equations of physics. Their bias is revealed in spontaneous symmetry-breaking–our own breathtakingly asymmetrical world the result of a kind of random tilt, one that obscures the world’s symmetries even as it breaks them.
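The toy example can be checked mechanically. In the sketch below, the equation x² = 4 respects the symmetry x → −x, while each individual solution breaks it:

```python
# The equation x**2 == 4 is unchanged by the substitution x -> -x.
solutions = [x for x in range(-10, 11) if x ** 2 == 4]
print(solutions)  # [-2, 2]

# The solution SET respects the symmetry...
assert {-x for x in solutions} == set(solutions)
# ...but choosing either single solution introduces an unavoidable bias,
# positive or negative: no solution is its own mirror image.
assert all(x != -x for x in solutions)
```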

Such is the standard line, its principles governing the standard model, the half-completed collection of principles that, like a trusted scout, physicists believe point toward the grand unified theory to come. The laws of nature are radically simple in a second sense. They are, those laws, simple in their structure, exhibiting a shapeliness of mathematical form and a compactness of expression that itself cannot be improved in favor of anything shapelier or more compact. They represent the hard knot into which the world of matter has been compressed. This is to return the discussion to symbols and information, the cross of words throwing a queer but illuminating lurid red light onto the laws of physics.

The fundamental laws of physics capture the world’s patterns by capturing the play of its symmetries. Where there is pattern and symmetry, there is room for compression, and where room for compression, fundamental laws by which the room is compressed. At the conceptual basement, no further explanation is possible and so no further compression. The fundamental laws of physics are complex in that they are incompressible, but they are simple in that they are short–astonishingly so in that they may be programmed in only a few pages of computer code. They function as tense, tight spasms of information.

It is with the fundamental laws of physics that finally Paley’s question comes to an end; it is the place where Paley’s question must come to an end if we are not to pass with infinite weariness from one set of complicated facts to another. It is thus crucial that they be simple, those laws, so that in surveying what they say, the physicist is tempted no longer to ask for an account of their complexity. Like the mind of God, they must explain themselves. At the same time, they must be complete, explaining everything that is complex. Otherwise what is their use? And, finally, the fundamental laws must be material, offering an account of spirit and substance, form and function, all of the insubstantial aspects of reality, in terms (metaphorically) of atoms and the void. Otherwise they would not be fundamental laws of physics.

Retro

TO THOSE COMMITTED to its success, contemporary physics has seemed a convergent sequence, a fantastic synthesis of a scientific theory and a moral drama. The grand unified theory is not only a fixed point but a place of blessed if somewhat delayed release.

And yet many of the most notable movements in twentieth-century thought suggest something rather different, the conclusion of almost every grand intellectual progression blocked by incompleteness, or daunting complexity, or a wilderness of mirrors.

Any mathematical system rich enough to express ordinary arithmetic, Gödel demonstrated in 1931, is incomplete, some simple arithmetic proposition lying beyond the system’s reach. In a result only slightly less fantastic, the Polish logician Alfred Tarski demonstrated that the concept of truth could not be defined within any language in which it is expressed. For that, a retreat to a richer language is required. That language, in turn, requires a still further language for the requisite definition, and so upward along an endless progression, the complete definition of truth beckoning forever and forever out of reach.

Until recently, the physical sciences have seemed immune to the retrograde currents so conspicuous elsewhere. But much against their will, physicists have come to suspect that there is a three-way tug among materialism, completeness, and simplicity, the idea of an unutterably complete but superbly simple materialist theory strained from every direction.

It is at the far margins of speculation that, like oil, anxiety is leaking from the great speculative structures. The universe, so current theory runs, erupted into existence in an explosion. One minute there is nothing, the next, poof . . . something arises out of nothing, as space and time come into existence and the universe goes on with the business of getting on.

An explosion? An explosion in what?

Nothing.

Nothing?

That’s right. Nada. Just as there is no point north of the geomagnetic North Pole, Stephen Hawking has observed with some asperity, so there is no time prior to the big bang, no space within which the big bang took place.

Thoughtful men and women have from time immemorial scrupled at this scenario; theirs is an ancient argument. Something cannot arise from nothing. At the place where the event occurs, the intellect simply goes blank. Explanations rarely help. “In the latest version of string theories,” Steven Weinberg writes, “space and time arise as derived quantities, which do not appear in the fundamental equations of the theory.”

This is odd, but what follows is odder yet. “In these theories, space and time have only an approximate significance; it makes no sense to talk about any time closer to the big bang than about a million trillion trillion trillionth of a second.” This lighthearted remark suggests that within a certain interval of time it makes no sense to talk of time at all. The effect is rather as if a genial host were loudly to assure a guest that his money is no good while simultaneously endeavoring to take it. Still other physicists repose their confidence in the laws of physics, attributing a strange mystic power to the symbols themselves. Existing in the same undiscoverable realm as Plato’s forms, they nonetheless function, those laws, to bring the world into existence. This may seem an exercise in metaphysics of just the sort that, with a fierce wet snort, some physicists routinely deride; it is what other physicists commend. The origin of the big bang lies with the laws of physics, Paul Davies insists, and “the laws of physics don’t exist in space and time at all. They describe the world, they are not ‘in’ it.”

The image of the fundamental laws of physics zestfully wrestling with the void to bring the universe into being is one that suggests very little improvement over the accounts given by the ancient Norse in which the world is revealed to be balanced on the back of a gigantic ox.

If this is how explanations come to an end, what of materialism? The laws of physics are sets of symbols, after all–these now vested with the monstrous power to bring things and urges into creation, and symbols belong to the intelligence-infused aspects of the universe. They convey information, they carry meaning, they are part of the human exchange. And yet, it was to have been intelligence itself (and thus complexity) that the laws of physics were to explain, symbols and signs, meanings and messages–all accounted for by the behavior of matter in one of its modes.

The Inferential Staircase

TRIAGE IS A TERM of battlefield medicine. That shell having exploded in the latrine, the tough but caring physician divides the victims into those who will not make it, those who will, and those who might. The fundamental laws of physics were to provide a scheme of things at once materialistic, complete, and simple. By now we know, or at least suspect, that materialism will not make it. And not simply because symbols have been given a say-so in the generation of the universe. Be it said between us, physics is simply riddled with nonmaterial entities: functions, forces, fields, abstract objects of every stripe and kind, waves of probability, the quantum vacuum, entropy, and energies.

There remain completeness and simplicity. Completeness is, of course, crucial, for without completeness, there is no compelling answer to Paley’s question at all. What profit would there be if the laws of physics explained the complexity of plate tectonics, but not the formation of the ribosomes? Absent completeness, may not the universe break apart into separate kingdoms ruled by separate gods, just as, rubbing their oiled and braided beards, the ancient priests foretold? It is a vision that is at issue, in part metaphysical in its expression, and in part religious in its impulse–the fundamental laws of physics functioning in the popular imagination as demiurges, potent and full of creative power. And if they are potent and full of creative power, they had better get on with the full business of creation, leaving piecework to part-time workers.

What remains to be completed for this, the most dramatic of visions to shine forth irrefrangibly, is the construction of the inferential staircase leading from the laws of physics to the world that lies about us, corrupt, partial, fragmented, messy, asymmetrical, but our very own, beloved and irreplaceable.

No one expects the laws of physics by themselves to be controlling. “The most extreme hope for science,” Steven Weinberg admits, “is that we will be able to trace the explanations of all natural phenomena to final laws and historical accidents.” Why not give historical accidents their proper name: chance? The world and everything in it, Weinberg might have written, has come into being by means of the laws of physics and by means of chance.

A premonitory chill may now be felt sweeping the room. “We cannot see,” Richard Feynman wrote in his remarkable lectures on physics, “whether Schrödinger’s equation [the fundamental law of quantum mechanics] contains frogs, musical composers, or morality.”

Cannot see? These are ominous words. Without the seeing, there is no secular or sacred vision, and no inferential staircase. Only a large, damp, unmortgaged claim.

And a claim, moreover, that others have regarded as dubious. “The formation within geological time of a human body,” Kurt Gödel remarked to the logician Hao Wang, “by the laws of physics (or any other laws of similar nature), starting from a random distribution of elementary particles and the field, is as unlikely as the separation by chance of the atmosphere into its components.”

This is a somewhat enigmatic statement. Let me explain. When Gödel spoke of the “field” he meant, no doubt, the quantum field; Schrödinger’s equation is in charge. And by invoking a “random distribution of elementary particles,” Gödel meant to confine the discussion to typical or generic patterns–what might reasonably be expected. Chance, again.

Under the double action of the fundamental laws and chance, Gödel was persuaded, no form of complexity could reasonably be expected to arise. This is not an argument, of course; it functions merely as a claim, although one made with the authority of Gödel’s genius. But it is a claim with a queer prophetic power, anticipating, as it does, a very specific contemporary argument.

“The complexity of living bodies,” Gödel observed, “has to be present either in the material [from which they are derived] or in the laws [governing their formation].” In this, Gödel was simply echoing Paley. Complexity must come from somewhere; it requires an explanation.

And here is the artful, the hidden and subversive point. The laws of physics are simple in that they are short; they function, they can only function, to abbreviate or compress things that display pattern or that are rich in symmetry. They gain no purchase in compressing strings that are at once long and complex–strings of random numbers, for example, the very record in the universe of chance itself. But the nucleic acids and the proteins are precisely such strings. We have no idea how they arose; filled with disturbing manic energy, they seem devoid of pattern. They are what they are. Their appearance by means of chance is impossible; their generation from the simple laws of physics ruled out of court simply because the simple laws of physics are indeed simple.
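The point about compression can be made concrete with a small sketch. Here a general-purpose compressor stands in, loosely, for a short physical law: a patterned string collapses to almost nothing, while a random string of the same length barely shrinks at all. (This is an illustrative analogy, not a claim about any particular law; the choice of `zlib` and the string lengths are arbitrary.)

```python
import random
import string
import zlib

# A highly patterned string: a short rule ("repeat AB") describes it entirely.
patterned = "AB" * 5000  # 10,000 characters of pure repetition

# A random string of the same length: no pattern for a compressor to exploit.
rng = random.Random(0)
rand = "".join(rng.choice(string.ascii_uppercase) for _ in range(10000))

def ratio(s: str) -> float:
    """Compressed size as a fraction of original size."""
    return len(zlib.compress(s.encode())) / len(s)

print(f"patterned: {ratio(patterned):.3f}")  # a tiny fraction of the original
print(f"random:    {ratio(rand):.3f}")       # close to the original size
```

The patterned string compresses to well under one percent of its length; the random string stays near the entropy floor of its alphabet. In the essay's terms: a short law gains purchase on the first string and none on the second.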

Gödel wrote well before the structure of the genetic code was completely understood, but time has been his faithful friend (in this as in so much else). The utter complexity of the informational macromolecules is now a given. Using very simple counting arguments, Hubert Yockey has concluded that an ancient protein such as “cytochrome c” could be expected to arise by chance only once in 10^44 trials. The image of an indefatigable but hopelessly muddled universe trying throughout all eternity to create a single biological molecule is very sobering. It is this image that, no doubt, accounted for Francis Crick’s suggestion that life did not originate on earth at all, but was sent here from outer space, a wonderful example of an intellectual operation known generally as fog displacement.
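The flavor of such counting arguments can be conveyed in a few lines. This is a naive sketch, not Yockey's actual calculation: it treats every position in the chain as independent and every residue as equally likely, and so yields a number far larger than Yockey's 10^44, which credits the many functionally equivalent substitutions a working cytochrome c tolerates. (The chain length of 104 residues is a commonly cited figure for cytochrome c and is assumed here for illustration.)

```python
from math import log10

ALPHABET = 20        # the standard amino acids
CHAIN_LENGTH = 104   # residues in a typical cytochrome c (assumed)

# Naive count: positions independent, residues equiprobable.
sequences = ALPHABET ** CHAIN_LENGTH

print(f"possible sequences: about 10^{log10(sequences):.0f}")
# → possible sequences: about 10^135
```

Even after Yockey's generous allowance for equivalent sequences shrinks this to one chance in 10^44, the number remains beyond any reasonable budget of trials.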

It would seem that in order to preserve the inferential staircase some compromises in the simplicity of the laws of nature might be in prospect. Luck cannot do quite what luck was supposed to do. Roger Penrose has argued on thermodynamic grounds that the universe began in a highly unusual state, one in which entropy was low and organization high. Things have been running down ever since, a proposition for which each of us has overwhelming evidence. This again is to explain the appearance of complexity on the world’s great stage by means of an intellectual shuffle, the argument essentially coming to the claim that no explanation is required if only because the complexity in things was there all along.

It is better to say frankly–it is intellectually more honest–that either simplicity or the inferential staircase must go. If simplicity is to join materialism in Valhalla, how, then, have the laws of physics provided an ultimate answer for Paley’s question?

Master and Mastered

ALTHOUGH PHYSICISTS ARE quite sure that quantum mechanics provides a complete explanation for the nature of the chemical bond (and so for all of chemistry), they are sure of a great many things that may not be so, and they have provided the requisite calculations only for the hydrogen and helium atoms. Quantum mechanics was, of course, created before the advent of the computer. It is a superbly linear theory; its equations admit of exact solution. But beyond the simplest of systems, a computational wilderness arises–mangrove trees, steaming swamps, some horrid thing shambling through the undergrowth, eager to engage the physicists (and the rest of us) in conversation.

Hey, fellas, wait!

The fundamental laws of physics are in control of the fundamental physical objects, giving instructions to the quarks and bossing around the gluons. The mathematician yet rules the elementary quantum world. But beyond their natural domain, the influence of the fundamental laws is transmitted only by interpretation.

As complex systems of particles are studied, equations must be introduced for each particle, the equations, together with their fearful interactions, settled and then solved. There is no hope of doing this analytically; the difficulties tend to accumulate exponentially. Very powerful computers are required. Between the music of Mozart and the fundamental laws of physics, the triple concepts of algorithm, information, and symbol hold sway. The content of the fundamental laws of physics is relative–it must be relative–to the computational systems needed to interpret them.

There is a world of difference between the character of the fundamental laws, on the one hand, and the nature of the computations required to breathe life into them, on the other. The statement that a group of rabbits is roughly doubling its size suggests to a mathematician an underlying law of growth. The rabbits are reproducing themselves exponentially. The law provides a general description of how the rabbits are increasing, one good for all time. It is an analytical statement; it strikes to the heart of things by accounting for what the rabbits are doing by means of a precise mathematical function.

Observing those same rabbits over a certain period of time, the computer tracks their growth in a finite number of steps: There are two rabbits to begin with, and then four and then eight, and so upward to sixty-four, when this imaginary exercise ceases on my say-so. The finite collection of numbers is a simulation, one in which the underlying reality is suggested, but never fully described.

Law and simulation have very different natures, very different properties. The law is infinite, describing the increase in rabbits to the very end of time, and the law, moreover, is continuous, and so a part of the great tradition of mathematical description that is tied ultimately to the calculus.

The simulation is neither infinite nor continuous, but finite and discrete. It provides no deep mathematical function to explain anything in nature. It specifies only a series of numbers, the conversion of those numbers into a pattern a matter for the mathematician to decide.
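The contrast can be put in a few lines of code, using the essay's own rabbits. The law is a closed-form function, defined at every moment of time; the simulation is a finite loop that halts on someone's say-so and yields only a table of numbers. (A minimal sketch; the function names are of course mine.)

```python
def law(t: float) -> float:
    """The analytic law: two rabbits, doubling per unit time.

    Continuous and good for all time, in the tradition of the calculus.
    """
    return 2 * 2 ** t

def simulate(steps: int) -> list[int]:
    """The simulation: a finite, discrete record of doublings."""
    rabbits, record = 2, []
    for _ in range(steps):
        record.append(rabbits)
        rabbits *= 2
    return record

print(simulate(6))  # [2, 4, 8, 16, 32, 64] -- the exercise ceases on my say-so
print(law(2.5))     # the law also answers at t = 2.5; the table cannot
```

The simulation's six numbers suggest the underlying exponential but never state it; recovering the law from the table is the mathematician's contribution, not the computer's.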

If the law describes the hidden heart of things, the simulation provides only a series of stylized snapshots, akin really to the succession of old-fashioned stills that New York tabloids used to feature: the woman, her skirt askew, falling from the sixth-floor window, then passing the third floor, a look of alarm on her disorganized features, finally landing on the hood of an automobile, woman and hood both crumpled. The lack of continuity in either scheme makes any interpolation between shots or simulation a calculated conjecture. The snapshots, it is worthwhile to recall, provide no evidence that the woman was falling down, but we see the pictures and we neglect the contribution we make to their interpretation.

Computational schemes figure in any description of the inferential staircase; the triple concepts of algorithm, information, and symbol have made yet another appearance. They are like the sun, which comes up anew each day. And they serve to sound a relentlessly human voice. The expectation among physicists, of course, is that these concepts play merely an ancillary role, the inferential staircase under construction essentially by means of the fundamental laws of physics. But the question of who is to be the master and who the mastered in this business is anything but clear.

To the extent that the fundamental laws of physics function as premises to a grand metaphysical argument, one in which the universe is to appear as its empyrean conclusion, the triple concepts–algorithm, information, symbol–function as additional premises. Without those premises, the laws of physics would sit simply in the shedding sun, mute, inglorious, and unrevealing.

A third revision of Steven Weinberg’s memorable affirmation is now in prospect: The best that can be expected, the most extreme hope for science, is an explanation of all natural phenomena on the basis of the fundamental laws of physics, and chance, and a congeries of computational schemes, algorithms, specialized programming languages, techniques for numerical integration, huge canned programs (such as Mathematica or Maple), computer graphics, interpolation methods, computer-theoretic shortcuts, and the best efforts by mathematicians and physicists to convert the data of simulation into coherent patterns, artfully revealing symmetries and continuous narratives.

A certain sense of radiant simplicity may now be observed dimming itself. The greater the contribution of the triple concepts, the less compelling the vision of an inferential staircase. Matters stand on a destructive dilemma. Without the triple concepts, the fundamental laws of physics are incomplete, but with the triple concepts in place, they are no longer simple. The inferential staircase was to have led from the laws of physics to that sense of intelligence by which complexity is explained; if intelligence is required to construct the very stairs themselves, why bother climbing them? The profoundly simple statements that were to have redeemed the world must do their redeeming by means of the very concepts that they were intended to redeem.

The Gate of Time

THERE SHOULD BE IN THIS no cause for lamentation; the destabilizing relationship between the technology of information and the fundamental laws of physics represents a form of progress, an improvement in our understanding. We have always suspected, and now we know, that while there are things that are simple and things that are complex in nature, the rich variety of things is not derived from anything simple, or if derived from something simple, not derived from it completely.

Complexity may be transferred; it may be shifted from theories to facts and back again to theories. It may be localized in computer derivations, the fundamental laws kept simple, or it may be expressed in a variety of principles and laws at the expense of the idea of completeness. But ultimately, things are as they are for no better reason than they are what they are. We cannot know more without making further assumptions, those assumptions like a dog’s maddening tail–in front of our nose, but forever out of reach.

What is curious, because it seems to have been something that we expected all along, is that it does not seem to matter. It does not matter at all . . .

. . . They have gathered by the gate of time, the quick and the dead and those eager to be born; some come to remember, others to forget. A cool gray fog is blowing. The spidery hands of the great clock measuring millennia instead of minutes are crawling toward midnight. Techno-wizards with pale green eyes have thronged the lounge of paradise. Elvis is there, singing in a sulky baritone. Everything has been forgiven, but nothing has been forgotten. The Ancients tell stories of why time began and how space was curved. There is the sound of chimes, the smell of incense. A woman with flaming cheeks remarks in a calm, clear voice that she has been kidnapped by space aliens. They have probed her every orifice with lights that burn like fire.

David Berlinski

Writer, Thinker, Raconteur, and Senior Fellow, Discovery Institute
David Berlinski received his Ph.D. in philosophy from Princeton University and was later a postdoctoral fellow in mathematics and molecular biology at Columbia University. He is currently a Senior Fellow at Discovery Institute's Center for Science and Culture. Dr. Berlinski has authored works on systems analysis, differential topology, theoretical biology, analytic philosophy, and the philosophy of mathematics, as well as three novels. He has also taught philosophy, mathematics and English at such universities as Stanford, Rutgers, the City University of New York and the Universite de Paris. In addition, he has held research fellowships at the International Institute for Applied Systems Analysis (IIASA) in Austria and the Institut des Hautes Etudes Scientifiques (IHES) in France.