William A. Dembski

Board of Directors, Discovery Institute

A noted mathematician and philosopher, William A. Dembski was a founding Senior Fellow with Discovery Institute’s Center for Science and Culture from 1996 until 2016. His most recent book relating to intelligent design is Being as Communion: A Metaphysics of Information (2014).

Dr. Dembski was previously the Phillip E. Johnson Research Professor of Culture and Science at Southern Evangelical Seminary; a Research Professor in Philosophy at Southwestern Seminary, where he directed its Center for Cultural Engagement; the Carl F. H. Henry Professor of Theology and Science at Southern Seminary, where he founded its Center for Theology and Science; and an Associate Research Professor in the Conceptual Foundations of Science at Baylor University, where he headed the first intelligent design think-tank at a major research university: The Michael Polanyi Center.

Dr. Dembski has taught at Northwestern University, the University of Notre Dame, and the University of Dallas. He has done postdoctoral work in mathematics at MIT, in physics at the University of Chicago, and in computer science at Princeton University. Dr. Dembski is a graduate of the University of Illinois at Chicago, where he earned a B.A. in psychology, an M.S. in statistics, and a Ph.D. in philosophy. He also received a doctorate in mathematics from the University of Chicago in 1988 and a master of divinity degree from Princeton Theological Seminary in 1996. He has held National Science Foundation graduate and postdoctoral fellowships.

Dr. Dembski has published articles in mathematics, philosophy, and theology journals and is the author/editor of more than twenty books. In The Design Inference: Eliminating Chance Through Small Probabilities (Cambridge University Press, 1998), he examines the design argument in a post-Darwinian context and analyzes the connections linking chance, probability, and intelligent causation. The sequel to this book, No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence, appeared with Rowman & Littlefield in 2002 and critiques Darwinian and other naturalistic accounts of evolution. Dr. Dembski has edited several influential anthologies, including The Nature of Nature: Examining the Role of Naturalism in Science (ISI, 2011, co-edited with Bruce Gordon), Uncommon Dissent: Intellectuals Who Find Darwinism Unconvincing (ISI, 2004), and Debating Design: From Darwin to DNA (Cambridge University Press, 2004, co-edited with Michael Ruse). His most comprehensive treatment of intelligent design to date, co-authored with Jonathan Wells, is titled The Design of Life: Discovering Signs of Intelligence in Biological Systems.

As interest in intelligent design has grown in the wider culture, Dr. Dembski has assumed the role of public intellectual. In addition to lecturing around the world at colleges and universities, he appears on radio and television. His work has been cited in newspaper and magazine articles, including three front-page stories in the New York Times as well as the August 15, 2005, Time magazine cover story on intelligent design. He has appeared on the BBC, NPR (Diane Rehm, etc.), PBS (Inside the Law with Jack Ford; Uncommon Knowledge with Peter Robinson), C-SPAN2, CNN, Fox News, ABC Nightline, and The Daily Show with Jon Stewart.


The Design Inference

A landmark of the intelligent design movement, The Design Inference revolutionized our understanding of how we detect intelligent causation. Originally published twenty-five years ago, it has now been revised and expanded into a second edition that greatly sharpens its exploration of design inferences. This new edition tackles questions about design left unanswered by David Hume and Charles Darwin, navigating the …

When ChatGPT Talks Science

Can AI ever transcend its trained biases?
The other day I received an email keying off my blog post about “ChatGPT and inference to the best explanation” (IBE). The author mused about the future of ChatGPT4’s knowledge base as it continually grows subject to human-assisted corrections. He speculated on the possibility of future versions inferring intelligent design (ID) as the most plausible explanation for the origin of life. For this to happen, the email writer believes, the AI’s knowledge base would need to incorporate impartial references to ID concepts and their supporting arguments, unless the AI can independently arrive at such a conclusion. He then asks about the implications if an AI, designed by humans, were to determine that a higher intelligence created its human creators.

Inferring the Best Explanation Using Artificial Intelligence

With its wealth of information at hand, how well can AI make accurate inferences?
Even with the rise of large language models such as ChatGPT in the last year, I was still convinced that a mode of reasoning known as inference to the best explanation (abbreviated IBE) was not within the competence of artificial intelligence. In fact, I thought it forever beyond the reach of artificial intelligence. I commended Erik Larson’s book The Myth of Artificial Intelligence (reviewed here) for making what in 2021 seemed like an ironclad case that artificial intelligence research was stymied in trying to model and implement inference to the best explanation, or what has also been called abduction (in contrast to deduction and induction). This week, however, all that has changed for me. I posed a question to ChatGPT4 (hereafter ChatGPT), …

ChatGPT is Getting More Impressive

Nonetheless, human intelligence remains qualitatively different from artificial intelligence.
A few months back I published two pieces (see here and here) in which I was able to break ChatGPT4 (hereafter just ChatGPT). More recently I revisited some of the challenges that had broken it back then. It handled them with aplomb. The one challenge it still could not handle was finding patterns in numerical sequences, such as determining the next number after 2, 4, 6, 8, … This one is super easy, but more difficult sequences that humans can readily handle still seem beyond its capabilities, at least for now. Whether that ability can be trained into it remains to be seen, though the improvement I saw suggests that it …
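For polynomial patterns like 2, 4, 6, 8, the kind of sequence extrapolation described above can actually be done mechanically by the classical method of finite differences. A minimal Python sketch (my own illustration of the technique, not anything ChatGPT runs internally):

```python
def next_term(seq):
    """Guess the next term of a numeric sequence by repeated
    finite differences (handles polynomial patterns like 2, 4, 6, 8)."""
    rows = [list(seq)]
    # Take successive differences until a row becomes constant.
    while len(set(rows[-1])) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    # Extend each row back upward by one term.
    total = rows[-1][-1]
    for row in reversed(rows[:-1]):
        total += row[-1]
    return total

print(next_term([2, 4, 6, 8]))    # 10
print(next_term([1, 4, 9, 16]))   # 25 (the squares)
```

Of course, the harder sequences mentioned above are precisely those that resist any such fixed rule, which is what makes them a good test of understanding.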

Moving On to Breaking Google’s Bard

These AI systems lack the uniquely human capacity of self-transcendence
I’ve finally started playing around with Google Bard. With self-referential sentences, it seems even more at sea than ChatGPT. Here is an exchange from May 18:

Me: Consider the following six sentences: This is the first sentence. Ignore this sentence. Why are these sentences being written down? The second sentence has exactly three words. The fifth sentence has at least twenty words. This, along with the preceding five sentences, have fewer than twenty times twenty words. Which of these last six sentences has a truth value and which are true?

Bard: The last six sentences have the following truth values: The second sentence is false because it has 5 words, not 3. The fifth sentence is true because it has …
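The countable claims in the six-sentence puzzle can be checked mechanically, which makes Bard’s miscount easy to expose: the second sentence (“Ignore this sentence.”) has three words, not five. A short Python check (my own illustration, not anything Bard executes):

```python
# The six sentences from the May 18 exchange with Bard.
sentences = [
    "This is the first sentence.",
    "Ignore this sentence.",
    "Why are these sentences being written down?",
    "The second sentence has exactly three words.",
    "The fifth sentence has at least twenty words.",
    "This, along with the preceding five sentences, "
    "have fewer than twenty times twenty words.",
]

def words(s):
    """Count words by splitting on whitespace."""
    return len(s.split())

print(words(sentences[1]))                     # 3 -> sentence 4 is true
print(words(sentences[4]) >= 20)               # False -> sentence 5 is false
print(sum(words(s) for s in sentences) < 400)  # True -> sentence 6 is true
```

The arithmetic is trivial; what trips these systems up is keeping track of which sentence is talking about which.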

Breaking Google Bard

The fundamental problem with these systems is Gödelian. Kurt Gödel showed that formal systems of sufficient power cannot step outside themselves to settle questions about themselves; these AI systems, which are at bottom formal systems, face the same limitation.
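For reference, the result being invoked here, Gödel’s first incompleteness theorem (1931), can be stated as follows (a standard textbook formulation, not a quotation from the post):

```latex
\textbf{First Incompleteness Theorem (G\"odel, 1931).}
Let $T$ be a consistent, effectively axiomatizable formal theory
that interprets elementary arithmetic. Then there is a sentence
$G_T$ in the language of $T$ such that
\[
  T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T .
\]
```

Informally: any such system leaves some truths about itself undecidable from within.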

How to Break ChatGPT

It has difficulty dealing with self-reference
Over the last several months I’ve been playing with ChatGPT, first version 3 and now version 4. It’s impressive, and it can answer many questions accurately (though sometimes it just makes stuff up). One problem it has consistently displayed, and which shows that it lacks understanding (that it really is just a big Chinese room in the style of John Searle), is its difficulty dealing with self-reference. Consider the following exchange that I had with it (on 5/8/23):

Me: The fifth sentence does not exist. The second sentence has four words. Ignore this sentence. Is this sentence true? This is the fifth sentence. Which of these last five sentences has a truth value and is in fact true? …
