Darwin Devolves

A Response to My Lehigh Colleagues

Originally published at Evolution News

Recently in the journal Evolution, two of my colleagues in the Lehigh University Department of Biological Sciences published a seven-page critical review of Darwin Devolves. As I’ll show below, it pretty much completely misses the mark. Nonetheless, it is a good illustration of how sincere-yet-perplexed professional evolutionary biologists view the data, as well as how they see opposition to their views, and so it is a possible opening to mutual understanding. This is the first of a three-part reply.

I’d like to begin by enthusiastically affirming that the co-authors of the review, Greg Lang and Amber Rice, are terrific young scientists. Greg’s research is on the experimental laboratory evolution of yeast, and he’s an associate editor at the Journal of Molecular Evolution. Amber studies the evolutionary effects of the hybridization of two species of chickadee, and she’s an associate editor for Evolution. Not surprisingly, the review is well written and the authors have done a lot of homework, not only reading the book itself but also digging into other material I have written and relevant literature. What’s more, Greg and Amber are both salt-of-the-earth folks, cheerful, friendly, and great colleagues. The additional Lehigh people they cite in the Acknowledgements section share all those qualities. There is no reason for anyone to take any of the remarks in their review as anything other than their best honest professional opinions of the matter. So let’s get to the substance of the review.

“Two Critical Errors of Logic”

After introductory remarks, Lang and Rice begin by deferring to the review of my book in Science and a follow-up web post to show that the book contains “a few factual errors and many errors of omission.” (I, along with others, have dealt at length with those already; see here, here, here, here, here, and here.) Instead, in their own review they focus on what they see as two logical errors of the book: 1) that I wrongly equate “the prevalence of loss of function mutations to the inevitable degradation of biological systems and the impossibility of evolution to produce novelty”; and 2) that I wrongly confuse proteins with machines, and use that misguided metaphor to mislead readers. I’ll take those two points and their many subparts mostly in turn.

They begin with logical error #1 by deriding the First Rule of Adaptive Evolution as a “quality sound bite” that is “simplistic and untruthful to the data.” Recall that the First Rule states, “Break or blunt any functional gene whose loss would increase the number of a species’ offspring.” Also recall that I explained, in both the book and the journal article where it was first published, that it is called a “rule” in the sense of being a rule of thumb, not an unbreakable law, and it is called the “first” rule because that is what we should generally expect to happen first to help a species adapt, simply because there are many more ways to break a gene than to build a new constructive feature.

As you might imagine, I have read the Evolution review closely. Yet nowhere do the authors even try to show why the First Rule isn’t a correct statement. They point to mutations that are not degradative, but fail to show quantitatively that those other types will arise faster than degradative ones. In fact, the other types are expected to be orders of magnitude slower.
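To make the “many more ways to break than to build” point concrete, here is a toy target-size comparison. The numbers are my own illustrative assumptions, not figures from the book or the review; they are meant only to show why, when many sites in a gene can break it but only a few specific changes can build something new, degradative mutations are expected to show up far sooner.

# Toy target-size comparison; all numbers are illustrative assumptions, not data.
gene_length_bp = 1000                  # a typical-sized protein-coding gene
fraction_disruptive = 0.3              # assume ~30% of single-base changes would seriously damage the protein
breaking_targets = gene_length_bp * fraction_disruptive   # ~300 ways to break the gene
building_targets = 2                   # assume a particular constructive change requires one of ~2 exact substitutions
print(breaking_targets / building_targets)                # -> 150.0, roughly two orders of magnitude more ways to break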

The reviewers agree that the First Rule is fine for explaining many results from the experimental evolution of microbes such as bacteria and yeast, but they balk at extending it beyond the lab. In fact, they actively argue that lab results really can’t tell us much about the real world: “No deletion is beneficial in all environments and beneficial loss of function mutations that arise in experimental evolution are unlikely to succeed if, say, cells are required to mate, the static environment is disturbed, or glucose is temporarily depleted.” All of those situations, of course, will be common outside a laboratory.

One big fly in their argument, however, is that they overlook the results from non-laboratory evolution that I give in the book. Every species that has been examined in sufficient detail so far shows the same pattern as seen in lab results. For example, I open the book with a discussion of polar bear evolution. About two-thirds to three-quarters of the most highly selected genes that separated the polar bear from the brown bear are estimated by computer methods to have experienced mutations that were functionally damaging. (Some other reviewers questioned this. I showed why they are mistaken here.) Similar results were seen for the woolly mammoth. Neither of those species evolved in the laboratory. Except for the sickle mutation (which itself is a desperate remedy), all mutations selected in the wild in humans for resistance to malaria are degradative. Dog breed evolution, which has been touted as a great stand-in for selection in the wild, is mostly degradative, and lots of breeds have health problems.

What’s more, we might well ask, if it doesn’t mimic the world realistically, why do federal funding agencies award grants to those who study laboratory evolution? Lang and Rice aver that it does indeed give lots of helpful information:

Collectively, experimental evolution has yielded new insights into the tempo of genotypic and phenotypic adaptation, the role of historical contingency in the evolution of new traits, second order selection on mutator alleles, the power of sex to combine favorable (and purge deleterious) mutations, the dynamics of adaptation, and the seemingly unlimited potential of adaptive evolution.

(Those phrases are press-release fodder. As I have shown in numerous posts, the results are much more modest than the headlines make them sound; see here, here, here, here, and here.) But, as the reviewers themselves insist, those results are all based on an unnatural situation — on the prevalence of degradative mutations in artificial environments — so why should we trust that the results reflect what would happen in nature? How can the reviewers with any consistency accept some of the lab results but not others?

It astounds me to see how quickly lab evolution researchers disavow the importance of their own life’s work when some outsider draws an unwelcome inference. But perhaps we can still save the day. Maybe all of the researchers’ results point to some important lessons about unguided evolution. In fact, there’s no reason to think that many lab evolution experiments are different in relevant ways from how nature behaves.

Lab Reflects Nature

The first objection Lang and Rice raise against extrapolating results from lab evolution studies to evolution in the wild is that the environments are critically different:

[L]oss of function mutations are expected to contribute disproportionately to adaptation in experimental evolution, where selective pressures are high and conditions are constant, or nearly so.

That sounds a little off. After all, selective pressures in raw nature can be pretty stringent, too, as when 85 percent of a vertebrate species died in a single year due to altered weather conditions. And, as we saw above, species in a complex, changing natural environment, such as the polar bear and humans, show evolutionary behavior similar to that seen in laboratories. What’s more, the conditions in lab evolution are generally far from simple or “constant.” That’s because by far the greatest complexity in any organism’s environment is not due to the temperature or solution conditions in which it finds itself. Rather, it is due to the presence of other organisms, including others of its own species. If any individual suddenly gains a selective advantage — even in otherwise quite constant environmental conditions — then its progeny have a splendid chance to outcompete the progeny of all other organisms.

Let’s look at several examples from the best known lab evolution experiment — that of Richard Lenski, who has been growing E. coli for over thirty years to observe how it adapts. I’ll begin with the best known mutation of that long study — the development of mutant bacteria that could eat citrate in the presence of oxygen, which the ancestor strain could not do. I’ll skip over the molecular details here (I’ve commented elsewhere) and concentrate on the bottom line. Even though the environment had been constant, overnight the citrate mutant strain outgrew its brethren and took over the flask.

The initial citrate mutation was not degradative; rather, it involved a rearrangement of the bacterium’s DNA. Nonetheless, soon after that initial non-degradative change, several additional mutations occurred in other genes in support of the citrate utilization pathway. All appear to have broken their respective genes. Thus, as the First Rule of Adaptive Evolution would lead one to expect, a helpful, non-degradative mutation that took tens of thousands of generations to appear was quickly fine-tuned by mutations that broke several genes.

Let me stress that the genes that were broken following the initial citrate mutation had been helpful to the bacterium up to that point. They were apparently doing useful tasks. However, once the citrate mutation came along, the environment changed and they became a net burden, so out they went. Thus even useful genes, when circumstances change, will easily be tossed overboard by random mutation and natural selection to maximize the net benefit of even a non-degradative change.

We can derive another important lesson from the story of the citrate mutation. At the beginning of the E. coli evolution project, the starting bacteria were genetically pretty uniform (except for marker genes and such), because they came from a pure strain. (That is indeed one source of real constancy that the reviewers may have had in mind.) The bacteria then diverged from each other mostly by degradative mutations, because those were the quickest beneficial changes to hand in the new environment in which they found themselves. Yet the aftermath of the initial citrate mutation shows the same behavior. That is, the mutant rapidly took over the flask, yielding a new pure strain, and the new strain further adapted to its new environment by beneficial degradative evolution. We should expect the same behavior after selection on any gene in any species. Any non-neutral change in any organism’s genome represents a de facto new environment and, as the First Rule states, will tend strongly to be fine-tuned by the most rapidly occurring beneficial mutations. Of course, degradative mutations occur most rapidly.

(The only expected exception to this situation would be if no genes are available that can helpfully be degraded. An example may be the development of chloroquine resistance by the malaria parasite Plasmodium falciparum, which occurred mainly by multiple point mutations in the PfCRT protein.) 

Mutating a Mutator

A second example of fine-tuning by degradation in the Lenski experiment can be seen in the rise and slight fall of mutator strains — that is, bacteria that have lost much of their ability to repair their DNA. It transpires that, from the beginning of the E. coli lab evolution project, Lenski separately grew a dozen different test flasks of bacteria, in order to be able to ask questions about the replicability of evolution. Six out of the twelve replicate strains eventually became mutators (because a gene involved in DNA repair broke), with mutation rates more than a hundred times those of non-mutators. There is some question about whether those loss-of-function mutations helped the bacteria directly or by making other beneficial mutations appear faster. (That’s what the reviewers are referring to above as “second order selection on mutator alleles.”)

Whatever the resolution of that second-order question, one mutator led to a first order effect. The Lenski lab noticed that, after a while, the mutation rate of one of the mutator strains had decreased by half. Upon investigation they determined that the mutation rate had been reduced by breaking a second gene that is involved in DNA repair. Thus a problem caused by breaking one gene was partially offset by breaking a different gene. That’s what random mutation and natural selection do.

Let me emphasize that, like the genes broken to fine-tune the citrate mutation, the second gene involved in repair had been useful. It was performing a beneficial function. It was not superfluous. Nonetheless, since the environment changed with the appearance of the mutator mutation, the net benefit of getting rid of the gene apparently outweighed the benefit of keeping it. So out it went. The bacterium is now better adapted to its current environment, but certainly less flexible than it had been.

A laboratory is not nature, but we do lab experiments to understand how nature behaves. Lab evolution experiments show that whenever the environment changes, microorganisms will adjust with whatever helpful mutations come along first. Both simple math and relevant experiments indicate that by far those will be degradative mutations.


This is the second part of my three-part reply to the review of Darwin Devolves by my Lehigh University colleagues in the journal Evolution. It continues directly from Part 1 above.

A Limited Accounting of Degradation

Greg Lang and Amber Rice cite a number of articles to show that loss-of-function mutations are just a small minority of those found in studies of organisms.

However, the truth is that loss of function mutations account for only a small fraction of natural genetic variation. In humans only ∼3.5% of exonic and splice site variants (57,137 out of 1,639,223) are putatively loss of function, and a survey of 42 yeast strains found that only 242 of the nearly 6000 genes contain putative loss of function variants. Compared to the vast majority of natural genetic variants, loss of function variants have a much lower allele frequency distribution.

Yet those three studies they cite all search only for mutations that are pretty much guaranteed to totally kill a gene or protein. For example, one paper says:

We adopted a definition for LoF variants expected to correlate with complete loss of function of the affected transcripts: stop codon-introducing (nonsense) or splice site-disrupting single-nucleotide variants (SNVs), insertion/deletion (indel) variants predicted to disrupt a transcript’s reading frame, or larger deletions …

That’s akin to counting only the burnt-out shells of wrecked cars as examples of accidents that degrade an auto, while ignoring fender benders, flat tires, and so on. Many mutations that would not be picked up by the researchers’ methods would nonetheless be expected to seriously degrade or even destroy the function of a protein. Since the kinds of mutations counted in the cited papers are likely to arise at a rate at least ten-fold lower than general point mutations in a gene (which, again, the studies passed over), there may be many more genes — perhaps five- to ten-fold more (about a quarter to a half of mutated genes) — that have been degraded or even functionally destroyed. Further research is needed to say for sure. (I know which way I’ll bet.) The remaining fraction of mutated genes in the population is likely to consist mostly of selectively neutral changes, neither helping nor hurting the organism, and contributing nothing in themselves to the fitness of the species.
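For what it’s worth, here is the back-of-envelope arithmetic behind that estimate, written out in Python. The five- to ten-fold multiplier is my rough assumption from the paragraph above, not a measured value; the two reported fractions are taken from the figures the reviewers quote.

# Back-of-envelope arithmetic; the 5x-10x multiplier is a rough assumption, not data.
reported_lof_fraction_human = 57137 / 1639223   # ~3.5% of exonic/splice-site variants called loss-of-function
reported_lof_fraction_yeast = 242 / 6000        # ~4% of yeast genes carrying putative loss-of-function variants
for multiplier in (5, 10):
    print(multiplier,
          round(reported_lof_fraction_human * multiplier, 2),
          round(reported_lof_fraction_yeast * multiplier, 2))
# -> roughly 0.17-0.35 (human variants) and 0.20-0.40 (yeast genes):
#    in the same ballpark as the rough "quarter to a half" figure above.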

Replenishing the Gene Store

The reviewers then point to work showing that, while some genes are indeed degraded over the life of a species, new genes arise by duplication or horizontal gene transfer to replenish the supply. Thus there is a continuous supply of raw material for new evolution. But there are at least three serious problems they overlook. First, assuming a generation time of one year, the rate of duplication of any particular gene is estimated to be about one per ten million years per organism (although there is much uncertainty); and while horizontal gene transfer is frequent in prokaryotes, in eukaryotes it is much rarer. The rate at which any particular gene suffers a degradative mutation, by contrast, is expected to be about a hundred times faster than duplication. Thus every gene that could help by being degraded would have on average 100 chances to do so for every one chance that a gene which could help by duplicating would have. Second, as its name indicates, gene duplication yields just an extra copy of a gene, with the same properties as the parent gene. The extra copy would therefore have to twiddle its thumbs for another expected ten million years or so — all the while trying to dodge inactivating mutations — before acquiring a second mutation that might differentiate it a little bit in a positive way from the first, exactly like an unduplicated gene.
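Here is a minimal sketch of that rate comparison, using only the rough order-of-magnitude figures quoted in the paragraph above (with all their stated uncertainty), not new data.

# Rough rate comparison; figures are the order-of-magnitude estimates quoted above.
duplication_rate = 1 / 10_000_000            # duplications of a given gene per organism per year (1-year generations)
degradation_rate = 100 * duplication_rate    # degradative mutations of a given gene assumed ~100x more frequent
print(1 / degradation_rate)                  # -> ~100,000 years per degradative hit on a given gene
print(degradation_rate / duplication_rate)   # -> 100.0 chances to break a helpful-to-break gene per chance to duplicate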

In their review Lang and Rice write that perhaps the very fact that there were two copies of a particular gene would itself be helpful, because of the extra activity it would add to the cell. I agree that is possible. However, it is special pleading, because most duplicated genes would not be expected to behave that way. For every extra restriction put on the gene that is supposed to duplicate (such as partial duplication, duplication that joins it to another gene, and so on), a careful study of the topic must adjust the mutation rate downward, because fewer genes/events are expected to meet those extra restrictions.

The third and most serious problem Lang and Rice overlook is that they assume without argument that a duplicated gene would be able to integrate into an organism’s biology strictly by Darwinian (or at least unintelligent) processes. Yet not all genes or functions are the same, so critical distinctions must be made. As I have written in detail in Chapter 8 of Darwin Devolves and in response to other reviewers, some genes with simpler duties may have been able to do so, but others not. For example, duplicated genes for proteins called opsins are currently involved in human color vision. Yet all those proteins do pretty much the same thing, so duplication of an opsin gene would not be expected to disrupt an organism’s current biology too much. On the other hand, duplicated developmental genes, such as those for Hox proteins, would be expected to have a much more difficult time of it; they would be much more likely to cause birth defects than to help.

Since the question we are discussing is not about simple common descent, but rather about whether such fantastic development as we see in life could be produced with or without intelligent guidance, a proponent of Darwinian evolution has to show that chance could fold in genes for even the most difficult pathways, if the question is not to be begged. No one has ever even tried to show that.

A Fourth Nasty Problem

One of the papers the reviewers reference in this section is Shen et al. (2018). If you look up that paper you find that two of the four “Highlights” listed on the first page are that “Reconstruction of 45 metabolic traits infers complex budding yeast common ancestor” and “Reductive evolution of traits and genes is a major mode of evolutionary diversification.” It must take Darwinian tunnel vision to cite a paper that emphasizes how a complex ancestor gave rise to simpler yeast species by losing abilities over time as support for arguing that Darwinian evolution can build complexity.

The Shen et al. (2018) results point strongly to a fourth nasty problem with the notion of gene duplication as a replacement for older, degraded genes. On average, degradation would be expected to remove variety in the kinds of genes, whereas even the successful duplication and integration of a new gene merely adds another copy of a pre-existing gene type. Over time that will diminish gene diversity.

The Shen et al. (2018) paper isn’t alone in noticing the phenomenon of genome reduction. As sequencing data becomes more plentiful and accurate, more papers are being published that show the importance of loss-of-function from more-complex states in evolution (see here and here). As one group writes of mammalian development, “Our results suggest that gene loss is an evolutionary mechanism for adaptation that may be more widespread than previously anticipated. Hence, investigating gene losses has great potential to reveal the genomic basis underlying macroevolutionary changes.” Another group comments, “These findings are consistent with the ‘less-is-more’ hypothesis, which argues that the loss of functional elements underlies critical aspects of human evolution.” 

Standing Variation

Lang and Rice ding me for disrespecting standing variation. Standing variation consists of the mutant genes that are already present in a population and can be called upon by natural selection to help a species adapt to changed environmental circumstances, obviating the need for a new mutation. For example, the most highly selected mutant gene associated with thick- versus thin-beaked Galápagos finches did not first arise when Peter and Rosemary Grant were studying the finches in the 1970s. It actually arose about a million years ago and has been present in the group ever since. Ancient standing variation also seems to be behind the very rapid evolution of cichlid fish in Lake Victoria. I discuss both of those examples in Darwin Devolves. The reviewers write, “this does not lessen the instrumental role of standing genetic variation in adaptation to new environments.”

I heartily agree, and never wrote otherwise. There are two major problems, however, for the reviewers’ position. The first is that evolution by natural selection of standing variation does not address the primordial question that my book focused on — how complex structures arise, particularly at the molecular level. The second major problem is that standing variation nicely illustrates how preexisting slapdash mutations actively inhibit more complex ones. That is, rapid beneficial degradative mutations can become standing variation.

For example, ALX1, the mutant protein most strongly associated with thin versus thick beaks in Darwin’s finches, has only two changed amino acid residues out of 326 compared to the wild-type protein. Both of those changes are predicted by computer analysis to be damaging to the protein’s function. Yet apparently no better solution to the task of changing finch beak shape has come along in a million years, even though an enormous number of mutations would be expected to occur in the bird population during that time.

Why not? Well, consider that an army platoon that takes an unoccupied hill has a much easier task than an opposing force that later wants to displace it. Similarly, a likely big factor in finch evolution is that the quick and dirty mutations have already been established. So in order to supplant them a new mutation would have to be better right away than the fixed ones. That is, its selection coefficient compared to mutation-free ALX1 would have to be greater than that of the damaging ones. There is no known correlation, however, between the strength of the selection coefficient and whether a mutation is constructive or degradative. Thus we have no reason to think standing variation would be supplanted.
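Here is a toy numerical illustration of that hurdle. The selection coefficients are made up for the example (they are not taken from the finch data); the point is only that once the quick-and-dirty allele is fixed, a later constructive mutation competes against it rather than against the original, unmutated gene.

# Toy illustration with made-up selection coefficients.
s_resident = 0.05   # hypothetical advantage of the established degradative allele over the ancestral gene
s_new = 0.03        # hypothetical advantage of a later constructive mutation over the same ancestral gene
# Measured against the now-fixed resident allele, the newcomer's effective advantage is:
s_effective = (1 + s_new) / (1 + s_resident) - 1
print(round(s_effective, 3))   # -> -0.019: beneficial versus the ancestor, yet now selected against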

Recognizing that hurdle could go a long way toward understanding the reason for stasis in evolution or, put another way, the reason for the equilibrium in punctuated equilibrium. And the generality of punctuated equilibrium reminds us that the same situation — quick and dirty mutations either stalling or completely preventing constructive ones — is expected to be very frequent on Darwinian principles.

Ecological Changes

Lang and Rice emphasize the importance of the ecological diversification and behavioral changes of Darwin’s finches, as opposed to just changes in body shape.

Darwin’s finches are an icon of evolution for good reason, having radiated into numerous ecological niches and developed diverse resource specializations (including at least one case — feeding on mature leaves — that is, to the best of our knowledge, unknown in other bird orders, much less families). By adopting a restrictive definition of fundamental biological change, Behe dismisses all corresponding behavioral, digestive, and physiological adaptations.

Species limits and relationships of the Galápagos finches remain uncertain. Yet the massive study by Lamichhaney et al. (2015), in which the complete genomes of 120 Galápagos finches were sequenced (over 100 billion nucleotides), including representatives of every separate species and population, found that the most highly selected finch gene was ALX1, which, again, is associated with thick versus thin beaks. If those alterations of the finches’ behavioral and feeding habits required genetic changes, they eluded discovery. Perhaps the ecological changes are mostly the result of nongenetic modifications.

The authors of the review point to the example of the evolution of stickleback fish in freshwater lakes that have reduced defensive armored plates compared to saltwater varieties:

The causative variants are likely cis-regulatory changes that decreased expression of [the gene] Eda in developing armor, but not in other tissues. Darwin Devolves accepts as evidence only de novo protein evolution, a restriction Behe uses to support his “First Rule” and claim that “Darwinian evolution is self limiting.”

They have misunderstood the First Rule. There’s nothing in “Break or blunt any functional gene” that confines degradative mutations just to protein coding regions. If it would benefit a species to reduce the activity of a gene by messing up its control elements instead of its coded protein sequence, that works too. The first mutation that comes along to helpfully suppress a gene’s activity is the one with the best chance of being established in a population.

The very next sentence the reviewers write is this: “Narrow by definition and unsupported by the data, Behe’s First Rule does not stand up to scrutiny.” On the contrary, the scrutiny itself doesn’t stand up.


This is the third part of my three-part reply to the review of Darwin Devolves by my Lehigh University colleagues in the journal Evolution. It continues directly from Part 2 above; see Part 1 for the beginning.

Of Course Proteins Are Machines

A basic difference between the views of Greg Lang and Amber Rice and my own concerns the nature of the molecular foundation of life. They object that I consider many biochemical systems to be actual machines. They quote a line from Darwin Devolves stating that protein systems are “literal machines — molecular trucks, pumps, scanners, and more.” They write disapprovingly that the book claims “rod cells are fiber optic cables … The planthopper’s hind legs are a ‘large, in your face, interacting gear system.’” They do concede that I didn’t make up those claims about the machine-like nature of the systems out of whole cloth: “Most of the analogies in Darwin Devolves are not Behe’s creation — he has done well to scour press coverage and the scientific literature for relatable metaphors; and he is generous with their use.” Nonetheless, they say, “reality remains: proteins are not machines, a flagellum is not an outboard motor.”

On this point they are simply wrong. “Molecular machine” is no metaphor; it is an accurate description. Unless Lang and Rice are arguing obliquely for some sort of vitalism — where the matter of life is somehow different from nonliving matter — then of course proteins and systems such as the bacterial flagellum are machinery. What else could they be? Although they aren’t made of metal or plastic like our everyday tools, protein systems consist of atoms of carbon, oxygen, nitrogen, and so on — the same kinds of atoms as are found in inorganic matter, nothing special.

A dictionary definition of “machine” is “an assembly of interconnected components arranged to transmit or modify force in order to perform useful work.” Take a look for yourself here at the gears of the planthopper, here at the fiber-optic cells of the retina, and here at the bacterial flagellum. Do you think they fit that dictionary definition? Just like arms, legs, and jaws at the macro-level of life, all of which are organized to perform tasks and work by mechanical forces, so too is the molecular foundation of life. Biologists routinely use the phrase “molecular machine” (just do a search of PubMed or Google Scholar), and have done so for a long time. For example, consider from 1997, “The ATP Synthase — a Splendid Molecular Machine” and from 1999, “The 26S Proteasome: A Molecular Machine Designed for Controlled Proteolysis.” Organic chemists have also long used the term, albeit for much more modest assemblages than are found in life, assemblages that nonetheless earned their designers the 2016 Nobel Prize in Chemistry. As the Royal Swedish Academy of Sciences announced then:

The Nobel Prize in Chemistry 2016 is awarded to Jean-Pierre Sauvage, Sir J. Fraser Stoddart and Bernard L. Feringa for their design and production of molecular machines. They have developed molecules with controllable movements, which can perform a task when energy is added.

Lang and Rice do not argue that proteins are not machines. Rather, they simply declare “proteins are proteins, and not machines,” list a few things some proteins do, and assume that makes it obvious they can’t be machines: “Proteins are promiscuous. They moonlight, by chance interacting with other cellular components to effect phenotype outside their traditionally ascribed roles.” Well, now. So can a nut or bolt be “promiscuous” by, say, holding together various kinds of machines? Can a mousetrap “moonlight” as a tie clip? What exactly is it about those features they list that contradicts the dictionary definition of a machine? Or contradicts the evidence of your own eyes when viewing the images of protein machinery linked above? — Nothing at all.

Hand-Waving at Irreducible Complexity

Lang and Rice use their misunderstanding of molecular machinery as a basis for attacking irreducible complexity: “By acknowledging the reality that proteins are proteins, and not machines, we immediately recognize the shortcomings of irreducible complexity.” How so? They quote my definition of IC as “a single system composed of several well matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning.” They then object that:

The concept of irreducible complexity is flawed for two reasons. First, it considers a system only in its current state and assumes that complex interdependency has always existed. Second, irreducible complexity does not consider that proteins perform multiple functions and, therefore, evolutionary paths that seem unlikely when considering only one function may be realized through a series of stepwise improvements on another function.

They are wrong on both counts. There is nothing in the definition of IC that requires their conclusions. Irreducible complexity does focus on the current state of a system, but it does not assume that “complex interdependency has always existed.” Rather, it strongly implies (although it does not absolutely establish) that the complex interdependency did not arise by Darwinian processes — that it required intelligent input to produce. IC also does not require that proteins do not perform multiple functions. In fact, in 1996 in Darwin’s Black Box, I pointed out several proteins that do, and showed why that does not help at all in explaining IC.

On top of those mix-ups, the reviewers then forget the definition of irreducible complexity as “a single system” that they stated just in the previous paragraph and begin to write of it in terms of “essential genes.”

Simply because a system in its current form is irreducibly complex is not evidence that it did not evolve by random mutation and natural selection. Essentiality of a gene or protein is relative to its current state. For two closely related strains of yeast, between 1% and 5% of genes that are essential in one strain are dispensable in the other. Conditional essentiality is not simply due to the presence of second copy (or a close paralog) of the gene in one strain but not the other; rather, conditional essentiality is a complex trait involving two or more modifying loci

All of that may be true and interesting, but it is beside the point. A gene that is essential for the growth of an organism is not at all the same thing as a component of a single system that is required for the system’s function. For example, the motor is a necessary component of the irreducibly complex bacterial flagellum — in its absence, the molecular machine cannot work. But it is not necessary for the growth of E. coli in Richard Lenski’s laboratory evolution project. On the other hand, hemoglobin may be essential for human life, but it is not part of a single irreducibly complex system. Cells and organisms are composed of many molecular machines and biochemical pathways; they do not constitute a “single system” in the sense of the definition of irreducible complexity. I explained such confusions long ago. It’s little wonder that the reviewers don’t appreciate the problem IC presents to undirected evolution, since they misunderstand the concept itself.

Lang and Rice are welcome to think up all the evolutionary pathways they want, and join the long line of critics who have tried and (as I briefly show in Darwin Devolves and at much greater length elsewhere; see here, here, here, here, here, here, and here) failed over the years to figure out how irreducibly complex functional systems could be produced by random, undirected processes. Yet they don’t even try. The penultimate section of the long review is entitled “Two Examples to Illustrate the Evolution of Complexity.” (Hmm. Do you, dear reader, notice that an important word is missing from that title?) The reviewers dredge up a couple of old experiments that, at the time they were first published, were furiously spun as grave problems for irreducible complexity. (One sure way to get an otherwise-unremarkable paper noticed over the past several decades has been to claim it refutes those bothersome ID folks.) However, the results were actually quite modest and the relevance to ID claims nonexistent, as I wrote at the time. For those who want the details, click on the following links. Suffice it to say here that the first example cited by the reviewers (an investigation of a complex molecular machine called a vacuolar ATPase) at the very best concerned sideways evolution — no new functions, let alone new complex machinery, were involved. With the second example, a virus in a lab evolution experiment swapped out a binding site for a certain protein in the membrane of E. coli for a binding site for a second, homologous protein. In the process several E. coli genes were broken to help the bacterium survive.

Color me unimpressed. Yet it’s reasonable to think that, in preparation for writing the review, Lang and Rice would have searched for the very best studies produced so far that they think challenge irreducible complexity and intelligent design. Perhaps they found them.

Toward Mutual Understanding

In the Conclusion of their review Lang and Rice raise a plaintive cry, first celebrating in general the skepticism of scientists and then bemoaning the skepticism of the public toward grand Darwinian claims.

Scientists — by nature or by training — are skeptics. Even the most time honored theories are reevaluated as new data come to light. … 

[O]ver 150 years after On the Origin of Species — less than 20% of Americans accept that humans evolved by natural and unguided processes. It is hard to think of any other discipline where mainstream acceptance of its core paradigm is more at odds with the scientific consensus. …

Why evolution by natural selection is difficult for so many to accept is beyond the scope of this review; however, it is not for a lack of evidence: the data (only some of which we present here) are more than sufficient to convince any open minded skeptic that unguided evolution is capable of generating complex systems.

Perhaps I can help. After all, I used to believe that a Darwinian process did indeed build the wonders of life; I had no particular animus against it. Yet I believed it on the say-so of my instructors and the authority of science, not on hard evidence. When I read a book criticizing Darwin’s theory from an agnostic viewpoint it startled me, and I then began a literature search for real evidence that random mutation and natural selection could really do what was claimed for them. I came up completely empty. In the over thirty years since then, I’ve only become more convinced of the inadequacy of Darwinism, and more persuaded of the need for intelligent design at ever-deeper levels of biology, as detailed in my books.

Clearly Greg and Amber honestly disagree. How to explain that? To help answer, let’s first consider a different scientific discipline — physics. The history of physics offers a powerful lesson: widespread agreement on even the most basic ideas in a field is no guarantee that there is sufficient evidence to support a theory, or indeed that there is any evidence for it at all. Just ask James Clerk Maxwell, who wrote the article “Æther” in 1878 for the ninth edition of the Encyclopedia Britannica:

Whatever difficulties we may have in forming a consistent idea of the constitution of the æther, there can be no doubt that the interplanetary and interstellar spaces are not empty, but are occupied by a material substance or body, which is certainly the largest, and probably the most uniform body of which we have any knowledge. [Emphasis added.]

Maxwell, one of the greatest physicists of all time, calculated the density — to three significant figures — of the æther, a substance that doesn’t exist. If that doesn’t make the case for the peril of over-reliance on theory — and the need for profound scientific humility — nothing will.

But surely no branch of contemporary science could go so far astray, could it? — Maybe. In the past few years a theoretical particle physicist named Sabine Hossenfelder has made a splash by criticizing the reliance of other theoreticians on a gauzy concept of “beauty” to guide their calculations. She thinks that pretty much the whole field has been barking up the wrong tree for thirty years. Last summer she released a book on the topic, Lost in Math, which was favorably reviewed in Nature. She also maintains a blog, BackReaction, and holds forth regularly and entertainingly. Recently she put up a typically insightful, acerbic post, “Particle physicists excited over discovery of nothing in particular.” The first reader to comment at the site wrote sympathetically: “I believe it’s hard for anyone on the inside of a tribe to see the limitations of their own thinking. One has to step outside of the protection ring of orthodoxy.”

Respect the Views of the Public

Precisely. It’s hardly news that a group can share strong views on topics of mutual interest to its members, which many on the outside find less than compelling. Theoretical particle physicists, lawyers, members of the military, union members, business people, clergy, and on and on. It would be hard to find a group that didn’t have such shared views. Of course, that includes evolutionary biologists and scientists in general. I would like to delicately suggest that a large chunk of the disconnect (although certainly not the only factor) between the public and biologists over evolution is that, as a rule, biologists share a commitment to Darwin’s theory that the general public does not. That shared commitment leads biologists (and scientists in general) to require substantially less evidence to persuade them of the theory’s verity and scope than someone outside the tribe.

Contra Lang and Rice, it’s preposterous to say that the data “are more than sufficient to convince any open minded skeptic that unguided evolution is capable of generating complex systems.” Unless one defines a skeptic of Darwin’s theory (the most prominent proposed “unguided” explanation) as closed-minded, a quick visit to the library will disabuse one of that notion. (See here, here, here, here, here, here, and here.) Even in their own review, at best the authors argue that they see no obstacle to Darwinian processes producing functional complex systems; they surely don’t demonstrate that it can. And of all the relevant literature in books and journals, the two papers they pointed to as examples of the power of Darwin’s mechanism are quite modest indeed. When my first book, Darwin’s Black Box, was published in 1996 it elicited comments by bona fide evolutionary biologists such as: “there are presently no detailed Darwinian accounts of the evolution of any biochemical system, only a variety of wishful speculations,” and “There is no doubt that the pathways described by Behe are dauntingly complex, and their evolution will be hard to unravel…. We may forever be unable to envisage the first proto-pathways.” It’s hard to reconcile such statements with an assertion that the data are “more than sufficient.”

As quoted earlier, Lang and Rice write: “Scientists — by nature or by training — are skeptics. Even the most time honored theories are reevaluated as new data come to light.” That claim wouldn’t survive even a short trip through the history of science, which is of course replete with people (that’s another name for scientists) fighting tooth and nail to defend their ideas. Few scientists are as emotionless as Mr. Spock, and to maintain otherwise is little more than group-flattery. More to the point, scientists do not have a corner on the market for skepticism. In all walks of life that trait has its uses. A banker evaluating a loan, a voter listening to a politician’s speech, a teacher wondering whether the dog really did eat this student’s homework, a judge considering whether a defendant is indeed remorseful, an historian evaluating the direction of her academic field — pretty much everyone is skeptical when they smell a rat. And “pretty much everyone” is another way to say “the public.”

For what it’s worth, my advice on this matter for Lang, Rice, and others with similar views is to respect the opinions of the public, even if one disagrees with them and thinks them ill-founded, because, when it comes to the grand claims for Darwin’s theory, many folks think they smell a rat and are prudently exercising their skepticism. Indeed, instead of blaming the public, they should consider the possibility that perhaps the evidence for the vast scope of Darwin’s theory really isn’t as strong as biologists over the years have been telling each other.

Michael J. Behe

Senior Fellow, Center for Science and Culture
Michael J. Behe is Professor of Biological Sciences at Lehigh University in Pennsylvania and a Senior Fellow at Discovery Institute’s Center for Science and Culture. He received his Ph.D. in Biochemistry from the University of Pennsylvania in 1978. Behe's current research involves delineation of design and natural selection in protein structures. In his career he has authored over 40 technical papers and three books, Darwin Devolves: The New Science About DNA that Challenges Evolution, Darwin’s Black Box: The Biochemical Challenge to Evolution, and The Edge of Evolution: The Search for the Limits of Darwinism, which argue that living systems at the molecular level are best explained as being the result of deliberate intelligent design.