Discovery Institute’s Science Research Program comprises a vibrant community of scientists and scholars conducting research that investigates the evidence for design in nature, as well as research that critically examines the ability of material mechanisms to account for nature’s complexity. Much of this research is funded directly by Discovery Institute; other work is conducted by a network of ID-friendly scientists with whom Discovery Institute actively collaborates and whom it sustains.
20+ Active Research Projects | 250+ Peer-Reviewed Papers | $10+ Million Total Budget Since 2016
Intelligent design is a potent scientific theory that makes testable predictions, which researchers worldwide are actively investigating. The ID 3.0 research program brings together this community of scientists, who have collaborated to publish peer-reviewed scientific papers on the evidence for design in prominent journals, including Nature, ACS Nano, ACS Applied Materials & Interfaces, Nature Nanotechnology, Journal of Bacteriology, Scientific Reports, Frontiers in Microbiology, Frontiers in Genetics, Annual Review of Genomics and Human Genetics, Biosystems, BMC Evolutionary Biology, BMC Genomics, Molecular Biology and Evolution, BIO-Complexity, Springer Proceedings in Mathematics and Statistics, PLOS One, and Journal of Theoretical Biology, as well as a book with Cambridge University Press, among other technical outlets.
ID research has gone through multiple phases, roughly described below.
The first phase of ID research developed basic theories of design detection via information, including concepts like irreducible complexity, specified complexity, and the explanatory filter.
The second phase of ID research began experimentally applying the design-detection methods developed in ID 1.0 to real-world systems. Its focus was protein evolvability, while theorists advanced the positive case for design by showing its superior explanatory power via inferences to the best explanation. ID-oriented labs also emerged, such as the Biologic Institute and the Evolutionary Informatics Lab. The latter made important theoretical progress by showing that new complex and specified information can only be produced by intelligent agency.
The third and current phase of ID research extends ID 2.0 to new systems and fields, showing the heuristic value of intelligent design to guide scientific research. This research includes not only testing the origin of new systems, but also using ID to answer questions and make novel contributions in burgeoning fields, such as epigenetics, synthetic biology, systems biology, genomics (e.g., investigating function for junk DNA), systematics and phylogenetics, information theory, population genetics, biological fine-tuning, molecular machines, ontogenetic information, paleontology, quantum cosmology, cosmic fine-tuning, astrobiology, local fine-tuning, and many others.
Under ID 3.0 there is a special emphasis on unexpected features of the genome which reveal new layers of biological information and control. In addition to interpreting pre-existing data within an ID framework, we are generating new data and asking questions that ID prompts — and potentially answers.
As the examples listed above show, ID inspires new avenues of scientific research, and ID proponents conduct such research and have published hundreds of peer-reviewed scientific papers relevant to the evidence for design. The breadth of ID research illustrates that it can be divided into two general types: pure and applied.
Having shown that many features of nature were designed, pure ID research allows us to reasonably conclude that design is a useful model for understanding the natural world. This confidence in design theory in turn justifies the working assumption that design is prevalent, and on that assumption we can apply design reasoning to explore new systems in the natural world.
Pure ID research asks how we can detect design, and/or investigates natural systems to determine whether design is the best explanation for the features we observe. Essentially, pure ID research aims to refine and employ design-detection methods to determine if a design inference is warranted for a given system or phenomenon we find in nature. Pure ID research has determined that many aspects of life, planet Earth, the solar system, the galaxy, and the universe display evidence of design. This kind of research might also involve critiquing naturalistic explanations as part of making a case for design.
Applied ID research uses the assumption of design to better understand how natural systems work. A few examples will help illustrate what this means:
Applied ID research is thus making progress to help us better understand the workings of biological systems by applying the assumption of design to our investigations of the operations of nature.
Below we highlight a partial list of ID 3.0 projects and researchers, as well as papers produced by those projects. Some ID 3.0 projects and researchers are not listed, to protect the investigators from threats to their careers if it were publicly known they were doing ID-related research; some papers from some projects are omitted for the same reason. For a more complete list of peer-reviewed scientific papers supporting intelligent design, see Peer Reviewed Articles Supporting Intelligent Design.
Bacteria have short generation times and large population sizes, making them an ideal test case for the creative power of evolutionary mechanisms. This project, which spans multiple sub-projects, is testing the evolvability of new features in bacteria and other microorganisms through laboratory experiments and digital simulations. One project, led by biologists Ann Gauger and Ralph Seelke (late professor of Biology and Earth Sciences at the University of Wisconsin-Superior), broke a gene in the bacterium E. coli required for synthesizing the amino acid tryptophan. When the gene was broken in just one place, random mutations were capable of “fixing” it. But when just two mutations were required to restore function, Darwinian evolution became stuck, unable to restore full function.

Another project, by Michael Behe, reviewed numerous published examples of Darwinian evolution in bacteria and viruses and found that adaptations at the molecular level “are due to the loss or modification of a pre-existing molecular function,” showing that evolutionary mechanisms are far better at breaking features than at building new ones. This principle was confirmed by another ID 3.0 project, published in the Journal of Bacteriology by Dustin Van Hofwegen and Scott Minnich, which tested a widely touted bacterial innovation from Richard Lenski’s “Long-Term Evolution Experiment” and found it actually involved “no new genetic information (novel gene function).”

Theoretical simulations confirm the inability of Darwinian mechanisms to produce new features. Douglas Axe has developed a simulation called Stylus, which not only models the evolution of new proteins (see the Protein Zoo project below) but also modeled the “long-term evolution” of a population of 1,000 “digital organisms,” finding that over time they experienced “genome decay,” which suggests some factor must be inputting information to allow species to persist and evolve.
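The two-mutation barrier described above can be illustrated with back-of-envelope waiting-time arithmetic. This is a simplified sketch of the general principle, not Gauger and Seelke’s experimental protocol; the population size and mutation rate below are generic illustrative values, not measured parameters.

```python
# Toy waiting-time arithmetic (illustrative values, not the published study):
# if intermediate states confer no benefit, how long until some cell in a
# population acquires all k required point mutations at once?

def expected_generations(pop_size, mut_rate, k):
    """Expected generations until one individual carries all k specific
    mutations, assuming independent mutations and no benefit for
    partial combinations (a deliberate simplification)."""
    p_per_birth = mut_rate ** k            # one offspring gets all k at once
    return 1.0 / (pop_size * p_per_birth)  # geometric waiting time

N = 1e9   # assumed bacterial population size
u = 1e-9  # assumed per-site mutation rate per generation

one_step = expected_generations(N, u, 1)  # about 1 generation
two_step = expected_generations(N, u, 2)  # about 1e9 generations
```

On these toy numbers a one-mutation fix is expected almost immediately, while a fix requiring two specific mutations with no beneficial intermediate takes on the order of a billion generations, which is the qualitative pattern the experiments probed.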
Every time your heart beats, it pulses blood into your brain. If these pulses of blood flow weren’t carefully controlled, they would burst the fragile, tightly organized capillaries throughout the brain. Applying the assumption that the brain is a “designed system,” Michael Egnor, a pediatric neurosurgeon and professor in the Department of Neurological Surgery at Stony Brook University, has sought to understand how our physiology allows blood to flow smoothly into and through the brain (Egnor 2019). By carefully measuring blood flow, Egnor and his team have found that brain capillary blood flow is controlled by a band-stop filter, and cerebrospinal fluid flow by a band-pass filter, revealing intelligent design in the brain. They have published papers in BIO-Complexity and the Journal of Neurosurgery: Pediatrics, with ongoing work analyzing the data to assess the model of brain blood flow control.
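The band-stop and band-pass filters named here are standard signal-processing constructs, and their contrasting behavior is easy to demonstrate. The sketch below uses SciPy with arbitrary illustrative band edges; these are not Egnor’s measured physiological parameters.

```python
# Contrast of the two filter types named above: a band-stop filter rejects
# frequencies inside its band, while a band-pass filter transmits them.
# Band edges here are arbitrary illustrations, not physiological values.
from scipy import signal

fs = 100.0          # sampling rate, Hz (assumed)
band = (0.8, 1.5)   # band edges, Hz (assumed; roughly heart-rate scale)

sos_stop = signal.butter(4, band, btype='bandstop', fs=fs, output='sos')
sos_pass = signal.butter(4, band, btype='bandpass', fs=fs, output='sos')

# Gain of each filter at 1.1 Hz, a frequency inside the band.
_, h_stop = signal.sosfreqz(sos_stop, worN=[1.1], fs=fs)
_, h_pass = signal.sosfreqz(sos_pass, worN=[1.1], fs=fs)

gain_stop = abs(h_stop[0])  # near 0: the band-stop rejects in-band energy
gain_pass = abs(h_pass[0])  # near 1: the band-pass transmits it
```

The same input frequency is almost entirely removed by one filter and almost entirely preserved by the other, which is the functional distinction the research draws on.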
One of the largest and most successful ID 3.0 research projects focuses on nanomachines and is led by one of the world’s top organic chemists, Prof. James Tour of Rice University, along with his principal collaborator Richard Gunasekera, a biologist who has worked with the University of Houston and Rice University and is now at Biola University. Tour’s project initially designed miniature nanocars, 50,000 of which could fit across the width of a human hair, as a way of demonstrating new techniques for creating nanomachinery. While designing these machines, Tour developed a novel critique of chemical evolutionary theory. He showed that designing even relatively simple nanomachines required a complex series of chemical manipulations and a well-choreographed chemical procedure. Since the design of his nanomachines required intelligent intervention at every stage, he argues that the origin of life, and the much more complex molecular machinery it requires, cannot currently be plausibly explained by undirected chemical processes alone.
Tour subsequently applied the techniques he had developed in designing his nanocars to build another molecular machine, a “nano-auger” or “nanodrill,” which has demonstrated the capacity to destroy malignant cancer cells and antibiotic-resistant bacteria. On a parallel track, Richard Gunasekera has been demonstrating the efficacy of these same nanodrills in destroying antibiotic-resistant bacteria in his new lab at Biola University, where he recently moved along with one of Tour’s former postdocs. Tour, Gunasekera, and their postdocs have published many scientific articles on the capabilities of their nanomachines in Nature, ACS Nano, and Nature Nanotechnology. Gunasekera and his team are demonstrating the ability of the nanodrills to destroy bacteria and viruses, while Tour and his team continue to develop revolutionary applications of the nanodrills for the treatment of cancer.
Design detection is fundamental to the theory of intelligent design: it seeks to understand and identify the logical principles that we humans intuitively use when recognizing that something was designed. This largely theoretical project arguably began in 1998, when William Dembski published his foundational peer-reviewed book with Cambridge University Press, The Design Inference: Eliminating Chance Through Small Probabilities, first outlining how concepts such as the explanatory filter and complex and specified information (CSI) can help us detect design (Dembski, 1998). Dembski’s follow-up work incorporated “No Free Lunch” theorems and developed the Law of Conservation of Information, which holds that blind evolutionary mechanisms can shuffle CSI around, but only intelligence can generate truly novel CSI (Dembski, 2001). According to these principles, information is conserved such that “on average no search outperforms any other” (Dembski and Marks, 2009), meaning that even Darwinian evolution is really no better than a random search (Dembski et al. 2010; Ewert et al. 2013b). This work further shows that unless “active information” is inputted by an intelligent agent to improve a search, the search will effectively perform no better than random guessing.

These researchers have applied their methodology to multiple computer simulations of evolution that were claimed to produce new information via unguided evolutionary mechanisms. In each case, their methodology identified where the programmers smuggled in “active information” to make the simulation work (Dembski and Marks 2009a; Ewert et al. 2009; Ewert et al. 2010; Ewert et al. 2012b; Ewert et al. 2012a; Ewert et al. 2013a; Ewert 2014). Additional work has developed new ways of measuring information and detecting design, such as algorithmic specified complexity, an improved method of quantifying specification that measures it as a function of description length (Ewert et al. 2013; Ewert et al. 2015a; Ewert et al. 2015b; Dembski and Ewert 2023). This project’s work is ongoing, but it has already established strong theoretical grounds for understanding why only intelligence can produce new complex and specified information.
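The idea behind algorithmic specified complexity can be illustrated with a toy calculation. In the papers cited above, ASC is roughly an object’s improbability minus its description length; true Kolmogorov complexity is uncomputable, so the sketch below substitutes zlib-compressed size as a computable upper bound on description length. That substitution is a simplification for illustration, not the authors’ actual estimator.

```python
# Toy ASC: -log2 P(x) minus a compression-based bound on description length.
# zlib stands in for Kolmogorov complexity (a simplification for illustration).
import math
import random
import zlib

def asc_bits(x, alphabet_size=4):
    """Improbability of x under a uniform source, minus 8 bits per byte
    of its zlib-compressed form (an upper bound on description length)."""
    improbability = len(x) * math.log2(alphabet_size)  # -log2 P(x)
    description = 8 * len(zlib.compress(x, 9))
    return improbability - description

patterned = b"ACGT" * 250    # improbable under the uniform source, yet
                             # highly compressible: a short description
rng = random.Random(0)       # seeded for reproducibility
random_ish = bytes(rng.choice(b"ACGT") for _ in range(1000))

high = asc_bits(patterned)   # large positive score
low = asc_bits(random_ish)   # much lower: needs a long description
```

The patterned sequence scores high because it is both improbable and simply describable, while the pseudo-random sequence of the same length scores far lower; that gap is what ASC uses to flag specification.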
Intelligent design is not necessarily incompatible with common ancestry, but it suggests that non-materialistic possibilities could lead to new insights into systematics, the study of how organisms are related. This project is exploring whether dependency graphs can explain how organisms are related better than the nested hierarchies predicted by common ancestry. Dependency graphs allow the idea of “common design” to be applied to systematics, whereby organisms share similar traits not necessarily because they inherited them from a common ancestor, but because they were designed using similar blueprints. When organisms that are “distantly related” nonetheless share similar parts or genetic modules, common design might be a superior explanation to common descent. Various papers produced by this project are using dependency graphs to test the idea of common design against common ancestry.
A major area where evolution interfaces with traditional beliefs is human origins. For example, evolutionists have claimed that human genetic diversity is so great that humanity could never have derived from an initial pair of two individuals, but must instead have evolved from a population of thousands. They have also claimed that humans are not exceptional and have no biological or cognitive features distinguishing them from other animals. The Human Origins project is testing these claims. A rigorous population genetics model, developed by mathematician Ola Hössjer and geneticist Ann Gauger, was applied to a database of thousands of sequenced human genomes. The results showed that it was indeed possible for humanity to have originated from an original pair. Another aspect of this project compares humans and chimpanzees to determine what distinguishes the two species.
The Engineering Research Group (ERG) is a consortium of over 100 engineers and biologists working together under the assumption that viewing biological systems as engineered can help us better understand how they operate. The ERG’s goals are to (1) apply engineering principles to better understand biological systems, (2) craft a design-based theoretical framework that explains and predicts the behaviors of living systems, and (3) develop research programs that demonstrate the engineering principles at work in living systems. Workgroups and researchers within this project are examining topics such as Mechanisms of Adaptation, Viral Origins, Modeling Biochemical Pathways and Molecular Machines, Biomimetics, Biological Signaling, and the Origin of Life. One key participant is Bristol University engineering professor Stuart Burgess, who has critically investigated claims of poor design in the human body and shown those accusations to be false. The group hosts a bi-annual Conference on Engineering in Living Systems (CELS) where participants convene to present their ideas and results.
The bacterial flagellum is a prime example of a complex molecular machine that seemingly challenges step-by-step Darwinian models of evolution. The flagellar evolution project investigates bacterial flagellar proteins to determine whether homologues exist in other biological systems. This has the potential to test the co-option argument, which claims that flagellar proteins were borrowed, or “co-opted,” from these other systems. Another important aspect of this project is directly testing the irreducible complexity of the flagellum through genetic knockout experiments. Dustin Van Hofwegen, Assistant Professor of Biology & Biochemistry at the University of Northwestern, St. Paul, is performing genetic knockout experiments on the flagellum to determine which genes compose its irreducibly complex core.
Evolutionary scientists have long claimed that the vast majority of our DNA that does not code for proteins is useless genetic “junk.” Intelligent design theorists, on the other hand, have long predicted that much of this non-protein-coding DNA has important biological functions. This prediction flows naturally from the fact that intelligent agents typically design things with function and for a purpose. Because of this prediction, quite a few ID proponents have been involved in research investigating function for non-protein-coding DNA, i.e., what was previously considered “junk.” Many of these scientists are part of our Junk DNA Workgroup, a collaboration of scientists seeking function for “junk DNA.” Many of these researchers are in sensitive positions, so we do not list their names or publications.
The mind-body workgroup is a team of scientists and scholars brought together by Discovery Institute’s Bradley Center for Natural and Artificial Intelligence who have expertise in issues related to brain function, consciousness, and philosophy of mind. In 2023 they produced a technical volume, Minding the Brain: Models of the Mind, Information, and Empirical Science, edited by philosopher Angus Menuge, computer scientist Robert Marks, and software engineer Brian Krouse. Their ongoing work is evaluating whether the mind can be reduced to the brain or whether consciousness can exist apart from the brain.
Evolutionists had long believed that most genes originated from ancestral genes duplicating and then evolving into new genes with new functions; genes originating from random DNA sequences was considered too difficult, given the rarity of functional proteins and other challenges. Over the past few decades, however, geneticists have identified large numbers of genes with no discernible similarity to any identified genes outside a particular genus or even species. These taxonomically restricted, or orphan, genes can often make up more than 10% of the genes in a given species, and they are believed to have originated de novo from random sequences of DNA. This abundance of orphans poses an enormous challenge to undirected evolutionary models, since it implies that large amounts of new genetic information constantly appeared throughout the history of life in very short amounts of time. (Orphan genes are sometimes called ORFan genes, alluding to the term “ORF,” or Open Reading Frame, the stretch of DNA that defines a gene.) A team of scientists and computer programmers headed by Richard Gunasekera and Paul Nelson is developing a web-based tool known as ORFanID, which will allow researchers to enter new gene sequences directly and graphically display which other species or higher groups possess the identified gene. This tool will accelerate the identification of orphan genes and help determine their distribution in different animal groups.
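The “ORF” concept mentioned in passing above is concrete enough to sketch in a few lines: an open reading frame runs from a start codon (ATG) to the first in-frame stop codon. The sketch below scans a single strand only; it is a concept demo, not the ORFanID tool.

```python
# Minimal open-reading-frame (ORF) finder: scan for ATG, then read in-frame
# codons until the first stop codon. Forward strand only; a concept demo.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=2):
    """Return forward-strand ORFs, start codon through stop codon."""
    orfs = []
    for i in range(len(seq) - 2):
        if seq[i:i + 3] != "ATG":
            continue                       # not a start codon
        for j in range(i + 3, len(seq) - 2, 3):
            if seq[j:j + 3] in STOP_CODONS:
                if (j - i) // 3 >= min_codons:
                    orfs.append(seq[i:j + 3])
                break                      # stop at first in-frame stop codon
    return orfs

orfs = find_orfs("CCATGGCTTGATTT")  # one ORF: "ATGGCTTGA"
```

A real gene finder must also handle the reverse strand, all six reading frames, and nested or overlapping ORFs, which is part of what makes tools like ORFanID non-trivial.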
If life was designed, could some species be specially designed to provide medical benefits to humans? Richard Gunasekera, Research Professor of Science, Technology and Health at Biola University, has predicted that plants will contain special molecules designed to fight cancer. Evolutionary theory gives no reason to expect this, but under a design-based view of biology it makes perfect sense. He has developed techniques to screen plant samples for the relevant biomolecules and to test their efficacy against cancer cell lines. This research is currently in its early phases.
This longstanding project aims to assess the evolvability of various types of proteins through experimental and computational research. The foundational work was conducted by Douglas Axe at the Centre for Protein Engineering (CPE) in Cambridge and published in 2000 and 2004 in the Journal of Molecular Biology. It showed that only 1 in 10^77 sequences yields a stable protein fold capable of performing the function of a beta-lactamase enzyme. Follow-up research by Axe (2010) performed a population genetics study which found that when a feature requires more than six mutations before conferring any benefit, it is unlikely to arise in the whole history of the Earth, even in bacteria with their large population sizes and rapid generation times. Additional research by Gauger and Axe (2011) found that merely converting a particular metabolic enzyme to perform the function of a closely related enzyme, the kind of conversion evolutionists claim can readily happen, would require a minimum of seven mutations. Yet this exceeds the limits of what Darwinian evolution can produce over the Earth’s entire history, as calculated by Axe (2010). A follow-up study by Gauger, Axe, and biologist Mariclair Reeves bolstered this finding by attempting to mutate additional enzymes to perform the function of a closely related protein (Reeves et al. 2014). After inducing all possible single mutations in the enzymes, and many other combinations of mutations, they found that evolving a protein to perform the function of a closely related protein would take over 10^15 years, more than 100,000 times longer than the age of the Earth. Collectively, this research indicates strong barriers to protein evolution: evolving a protein from a similar protein often requires more time (and more mutations) than has been available. The project is currently headed by Brian Miller and also includes Ann Gauger, Douglas Axe, Mariclair Reeves, and Paul Nelson.
It is now investigating whether functional sequence rarity entails isolation in sequence space (and thereby inaccessibility to mutation and selection), and is also cataloging the broad spectrum of protein types, the “protein zoo,” to determine which types might be evolvable by mutation and selection and which are not.
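The 1-in-10^77 prevalence figure cited above can be put in perspective with simple order-of-magnitude arithmetic. The trial count below is a rough bound on the number of organisms in Earth’s history assumed here purely for illustration; both inputs are hedged, and only the size of the gap matters.

```python
# Order-of-magnitude comparison (illustrative inputs): how many functional
# hits would a planet-scale random search expect, if functional folds occur
# at ~1 in 10^77 of sequence space (the prevalence figure cited above)?
from math import log10

prevalence = 1e-77   # functional fraction of sequence space (cited estimate)
trials = 1e40        # assumed rough bound on organisms in Earth's history

expected_hits = prevalence * trials    # ~1e-37: effectively zero
shortfall = -log10(expected_hits)      # ~37 orders of magnitude short
```

Even granting every organism that ever lived one trial, the expected number of hits falls short of one by dozens of orders of magnitude, which is the quantitative point the project presses.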
This team of researchers, including biologists Richard Sternberg and Ann Gauger and mathematician Ola Hössjer, and headed by paleontologist Günter Bechly, is evaluating whether geologically available windows of time can accommodate the waiting times for the mutations required to build the complex anatomical features that appear throughout the history of life. In 2018, these investigators published a theoretical mathematical model for making such assessments in Springer Proceedings in Mathematics and Statistics, and in 2021 they further developed the model in a paper published in the Journal of Theoretical Biology. They are now applying their model to various systems.
Below is a non-exhaustive list of select researchers involved in the ID 3.0 research program. (Note: Some researchers involved in ID 3.0 projects must remain anonymous due to the threat of potential harm to their careers.)