William A. Dembski

Founding and Senior Fellow, Center for Science and Culture; Distinguished Fellow, Walter Bradley Center for Natural and Artificial Intelligence

Archives

Moving On to Breaking Google’s Bard

These AI systems lack the uniquely human capacity for self-transcendence
I’ve finally started playing around with Google Bard. With self-referential sentences, it seems even more at sea than ChatGPT. Here is an exchange from May 18:

Me: Consider the following six sentences:

This is the first sentence.
Ignore this sentence.
Why are these sentences being written down?
The second sentence has exactly three words.
The fifth sentence has at least twenty words.
This, along with the preceding five sentences, have fewer than twenty times twenty words.

Which of these last six sentences has a truth value and which are true?

Bard: The last six sentences have the following truth values:

The second sentence has exactly three words. (False)
The fifth sentence has at least twenty words. (True)
This, along with the preceding five sentences, have fewer than

How to Break ChatGPT

It has difficulty dealing with self-reference
Over the last several months I’ve been playing with ChatGPT, first version 3 and now version 4. It’s impressive, and it can answer many questions accurately (though sometimes it just makes stuff up). One problem it has consistently displayed, and which shows that it lacks understanding (that it really is just a big Chinese room in the style of John Searle), is its difficulty dealing with self-reference. Consider the following exchange that I had with it (on 5/8/23):

Me: The fifth sentence does not exist.
The second sentence has four words.
Ignore this sentence.
Is this sentence true?
This is the fifth sentence.

Which of these last five sentences has a truth value and is in fact true?

ChatGPT4: The five sentences you provided are: The fifth sentence does
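For readers who want to rerun this test themselves, here is a minimal sketch of how to send the same prompt programmatically, assuming the official openai Python client (v1 or later) and an OPENAI_API_KEY environment variable; the model name "gpt-4" is my stand-in for "version 4" above, and none of these client details come from the exchange itself.

# Minimal sketch: replay the self-reference prompt against the OpenAI API.
# Assumes the official `openai` Python client (v1.x) and that
# OPENAI_API_KEY is set in the environment; adjust the model name as needed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "The fifth sentence does not exist. "
    "The second sentence has four words. "
    "Ignore this sentence. "
    "Is this sentence true? "
    "This is the fifth sentence. "
    "Which of these last five sentences has a truth value and is in fact true?"
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed name for "version 4" above
    messages=[{"role": "user", "content": PROMPT}],
)
print(response.choices[0].message.content)

Because the models are sampled stochastically, the reply will vary from run to run; the point of the test is whether any run correctly sorts out which of the five sentences carries a truth value.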