Eric Holloway

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence

Eric Holloway is a Senior Fellow with the Walter Bradley Center for Natural & Artificial Intelligence, and holds a PhD in Electrical & Computer Engineering from Baylor University. A Captain in the United States Air Force, he served in the US and Afghanistan. He is the co-editor of Naturalism and Its Alternatives in Scientific Methodologies.

Archives

“Ghost Work” and the Enduring Necessity of Human Labor

Contrary to popular assumptions, the greater the automation, the greater the need for human labor.
In the famous sci-fi classic Dune, there are no computers. The only computing beings are humans with drug-accelerated reasoning abilities. That is strange for the sci-fi genre, where computers are often front and center. The reason there are no computers is that they have been banned. A great uprising, called the Butlerian Jihad, decided the risk of artificial intelligence was too great, and so the entire Dune universe banned computers as an existential threat to all humanity. Dune is a prophetic book in many ways, and the Butlerian Jihad is descriptive of our current time, when academics, technocrats, and presidents worry about whether the new generative AI could spell the end of humanity. But Dune, like the pundits and leaders, misunderstands AI, and technological

Ancient Greek Philosophy and Modern Blockbuster Graphics

The amazing computer-generated effects you see in almost every blockbuster today are only possible thanks to ideas proposed over 2300 years ago.
Once upon a time, in an online chatroom not so long ago, I was discussing philosophy with a friend. He said that, at the end of the day, philosophy is very speculative. It is hard to know which opinion is true. So, he decided, what is really important is whether an idea is useful. That way at least we get something out of the idea, even if we don’t know whether it is true. Strangely enough, this way of thinking makes ancient philosophy relevant, even where it is contrary to modern physics. In particular, it becomes relevant for modern graphics programming. The amazing computer-generated effects you see in almost every blockbuster today are only possible thanks to ideas proposed over 2300 years ago. First Principles In a very old piece of Greek philosophical literature titled

Life According to the Turing Machine

Is there more to the world than just data and digits?
John sat down at the kitchen table for breakfast. He poured himself a big bowl of bit-o-byte flakes and topped it off with a slosh of random milk. After a couple of big crunchy mouthfuls with his Turing spoon to reoptimize his compression ratio, John sat back and sipped at his virtual machine coffee. It was a pleasant morning. The principal components of the digitized sun were just visible above the trie data structure on the mountains in the distance. Thanks to the large rain bandwidth from the night before, the tries were well balanced, throwing off sparkles as the sun’s rays traced through to his viewport. What a wonderful world, he mused, and to think it all came from a dovetail Turing machine, churning away on an infinite stream of binary digits randomly

Does ChatGPT Pass the Creativity Test?

What does ChatGPT have to do in order to be considered creative?
What is creativity? Where does it come from? Why are some things humans do considered creative, while other things are mundane? Can AI be creative? To answer these questions, let’s come up with a definition. Creativity at least means something new has been done. No work that copies what has come before is considered creative. Creativity Criteria Just doing something new is not enough either. If it were, then I could easily be creative by flipping a coin 100 times. That specific sequence of coin flips will only occur once in the entire history of humanity. But no one would say I was creative when I flipped a coin. This means creativity has to generate a new insight. However, these two criteria are not adequate, either. I could flip a

Can AI Create its Own Information?

The simple answer is "no," but why? Eric Holloway explains
AI is amazing. It is all the rage these days. Companies everywhere are jumping on the AI bandwagon. No one wants to be left behind when true believers are raptured to the mainframe in the sky. What makes the AI work? The AI works because of information it gained from a human-generated dataset. Let’s label the dataset D. We can measure the information in the dataset with Shannon entropy. Represent the information with H(D). When we train an AI with this data, we are applying a mathematical function to the dataset. This function is the training algorithm. Labelling the training algorithm T, we represent training as T(D). The outcome of training is a new AI model. The model generates new data. We represent the generator function as G. When we apply G to T(D), then this is a new
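The bookkeeping in this teaser can be sketched in a few lines of Python. Here H is an empirical Shannon entropy, and T is a toy stand-in for the training step: a deterministic, many-to-one function, which by the data processing inequality can never increase the information measured in D. The sample dataset and the toy T below are illustrative placeholders, not any real training pipeline.

```python
from collections import Counter
from math import log2

def H(data):
    """Empirical Shannon entropy of a symbol sequence, in bits per symbol."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def T(data):
    """Toy stand-in for a training step: a deterministic, many-to-one map
    (here, collapsing every vowel to 'a'). Merging symbols can only lose
    information, never create it."""
    return "".join("a" if ch in "aeiou" else ch for ch in data)

D = "the quick brown fox jumps over the lazy dog"
print(H(D), H(T(D)))  # H(T(D)) is never larger than H(D)
```

Running any further deterministic function (such as a generator G) over T(D) keeps the entropy bounded by H(D) for the same reason.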

Say What? AI Doesn’t Understand Anything 

Is that supposed to be a cat, Mr. AI?
Whenever I look at AI-generated content, whether it be pictures or text, it all has the same flaw. The AI cannot comprehend what it is making. Let me explain. When we humans draw a picture, we are drawing a concept. We are drawing something like “cat climbs a tree” or “cowboy riding into the sunset”. It seems like this is what is happening with a picture-drawing AI. We give it a prompt, and it draws an associated picture. On second thought, maybe not… When AI draws the picture, what is really going on is that it is finding individual colored pixels that correlate with the letters we typed, drawn from the massive database stored in its neural network. Very different from how we draw. We sketch a scene, draw general shapes, then fill in the

Minecraft: A World of Information

The world's bestselling video game captures the insight that information is created and consumed by human minds
What if I told you intelligent design theory is responsible for the most successful computer game of all time? That game is Minecraft. It has sold over 238 million copies, more than any other game in history. What makes the game even more extraordinary is that it was created entirely by one man, Markus Persson, who later sold it to Microsoft for $2.5 billion. Hard to make this sort of thing up. How does Minecraft work? You can think of Minecraft as a computer game form of Legos, the popular building-block toy, with added monsters. You are dropped into an algorithmically generated world where you have to discover resources, find food, and build structures to survive the day-night cycle. At night, the world becomes populated by fearsome

For BitHeaven’s Sake

A satirical short story on the transhumanist quest (and failure) to achieve immortality
Bob and Sue were on their way to church one morning. Along the way they ran into their friend Fred. Fred was very wealthy, a billionaire in fact. Fred waved hi. Bob and Sue waved back. They asked Fred to come with them to church. Fred said no, he had more important things to do. “What is so important?” asked Sue. “I’m off to the real deal,” beamed Fred. Bob looked confused. “Real deal about what?” “You have a fake promise of eternal life. I’m about to get the real thing.” “You can’t be serious. Start talking some sense.” “Seriously. Here’s my voucher, see it right here.” Sue grabbed the piece of paper from Fred and read it aloud. “Good for one digital

AI and Human Text: Indistinct?

Here's a mathematical proof that challenges the assumption that AI and human-made text are the same
What is a poor teacher to do? With AI everywhere, how can he reliably detect when his students are having ChatGPT write their papers for them? To address this concern, a number of AI text detector tools have emerged. But do they work? A recent paper claims that AI-generated text is ultimately indistinguishable from human-generated text. The authors illustrate their claim with a couple of experiments that fool AI text detectors through simple variations to AI-generated text. Then, they go on to mathematically prove their big claim that it is ultimately impossible to tell AI text and human text apart. However, the authors make a crucial assumption. Faulty Premises The proof assumes that AI-generated text will become closer and closer to human-generated text until the

AI vs. Human Intentionality

If ChatGPT were trained over and over on its own output, it would eventually turn to gibberish
We can do a simple experiment that demonstrates the difference between AI and human intentionality. ChatGPT and the like are a sophisticated form of a mathematical model known as a Markov chain. A Markov chain is based on the Markov assumption that the future is entirely a product of the recent past. In other words, if we know the recent past, then nothing else we learn about the more distant past will improve our ability to predict the future. In ChatGPT terms, this means ChatGPT is based on the assumption that everything we need to know to predict future words is contained within a limited window of previously seen words. ChatGPT’s window was 3,000 words, and I believe the newest version has increased the window tenfold to 30,000 words. The important point
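The Markov assumption this teaser describes can be seen in a toy word-level Markov chain: the next word is drawn only from words that followed the same recent window in the training text, and nothing before that window matters. The corpus, window size, and function names below are arbitrary illustrations, not ChatGPT’s actual mechanism.

```python
import random
from collections import defaultdict

def build_chain(words, window=2):
    """Markov model: map each `window`-word context to the words that follow it."""
    chain = defaultdict(list)
    for i in range(len(words) - window):
        chain[tuple(words[i:i + window])].append(words[i + window])
    return chain

def generate(chain, seed, length, rng=None):
    """Extend `seed` by sampling; only the most recent window is consulted."""
    rng = rng or random.Random(0)
    out = list(seed)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(seed):]))
        if not followers:
            break  # context never seen in training: the model is stuck
        out.append(rng.choice(followers))
    return out

text = "the cat sat on the mat and the cat ran to the hat".split()
model = build_chain(text, window=2)
print(" ".join(generate(model, ("the", "cat"), 5)))
```

Note that the model can never emit a word absent from its training text, which is the limitation the experiment in the article probes.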

We Can’t Build a Hut to the Moon

The history of AI is a story of a recurring cycle of hype and disappointment
Once upon a time there lived a tribe on the plains. They were an adventurous tribe, constantly wanting to explore. At night they would watch the moon drift lazily overhead, and they became curious. How could they reach the moon? The moon was obviously higher than their huts. Standing on the highest hut, no one could reach the moon. At the same time, standing on the hut got them closer to the moon. So they decided to amass all their resources and build a gigantic hut. Their reasoning: if standing on a short hut got them closer to the moon, then standing on a gigantic hut would get them even closer. Eventually the tribe ran out of mud and rocks, and though the gigantic hut did get the tribe even closer to the moon, it would still always drift just tantalizingly out

ChatGPT Violates Its Own Model

Based on these exchanges, we can at least say the chatbot is more than just the ChatGPT neural network
Here is a quick overview of how ChatGPT operates under the hood. This will make it easier to spot suspicious behavior. The following is at a very high level. For the gory details, see the following two guides: The Illustrated GPT-2 (Visualizing Transformer Language Models) by Jay Alammar (jalammar.github.io), and The GPT-3 Architecture, on a Napkin (dugas.ch). What is ChatGPT? Let’s start with what ChatGPT is. ChatGPT is a kind of machine learning algorithm known as a neural network. To understand what a neural network is, recall your algebra classes. You remember being given a set of equations and being told to solve for some variables. Then you learned you could turn the set of equations into a
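The algebra-class starting point this teaser invokes can be made concrete: a small set of equations solved for some variables. A minimal sketch, with arbitrary illustrative coefficients:

```python
# Two equations in two unknowns, the algebra-class picture the article
# uses to motivate neural networks:
#   2x + 1y = 5
#   1x + 3y = 10
def solve_2x2(a, b, c, d, e, f):
    """Solve the system [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

x, y = solve_2x2(2, 1, 1, 3, 5, 10)
print(x, y)  # x = 1.0, y = 3.0
```

A neural network generalizes this picture: instead of solving a fixed system exactly, it adjusts a large set of coefficients until the equations approximately fit the training data.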

Blinded by a Defunct Theory

The "interaction problem" is everywhere we look in physics, but the dogma of materialism remains
Materialism. What a weird word. It sounds like a ghost, materializing in front of me. And it is sort of like a ghost, one that has mysteriously taken over the minds of many intelligent people. Because they believe in materialism, these smart people don’t believe in ghosts. Especially the ghost in the machine. The problem is there is no way for the ghost to interact with the machine. This is known as the “mind-body interaction problem”. The great thing about materialism is that, at least, that theory doesn’t have an interaction problem. Any material thing can interact with any other material thing. Yet there is a deep irony. Let’s explore the idea of materialism to see why. Materialism is the idea that reality only

Found! ChatGPT’s Humans in the Loop!

I am the only writer I’ve been able to discover who is suggesting ChatGPT has humans in the loop. Here is a series of telling excerpts from our last conversation…
The new ChatGPT chatbot has wowed the internet. While students revel in the autogenerated homework assignments, the truly marvelous property of ChatGPT is its very humanlike interactions. When you converse with ChatGPT you could swear there was a human on the other end, if you didn’t know better. For all intents and purposes, ChatGPT has achieved the holy grail of AI and passed the Turing test, on a global scale. Always quick to snatch a deal, Microsoft is currently in talks to spend a mere $10B to acquire half “the lightcone of all future value.” However, things are not always what they seem. Previously, I pointed out aspects of ChatGPT that implied humans were helping craft the chatbot’s responses. Now, I have an explicit admission from the chatbot that the OpenAI team

Is ChatGPT Solely a Neural Network? I Tested That…

Changing the random number test to a "computer easy, human hard" test requires simply that we ask ChatGPT to reverse the random number. It couldn't.
ChatGPT is a direct descendant of GPT-3, and is a fancy form of a machine learning algorithm called a neural network. For an overview of all of ChatGPT’s neural network complexity, here is a fun article. However, all that is beside the point. The important thing about a neural network: it can only generate what is in its training data. Therefore, ChatGPT can only produce what is in its training data. ChatGPT’s training data does not include the conversation you or I are having with ChatGPT. Therefore, if something novel occurs in the conversation, ChatGPT cannot reproduce it. That is, if ChatGPT is a neural network. Conversely, if ChatGPT reproduces novel text from the conversation, then ipso facto ChatGPT is not a neural network. And it is
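The "computer easy, human hard" flip mentioned in the subtitle rests on the fact that reversing a digit string is mechanically trivial for any conventional program, so a system failing at it is telling. The digits below are an arbitrary example, not the number used in the original test.

```python
def reverse_digits(s: str) -> str:
    """Reversing a string: a one-line, mechanically trivial operation
    for a computer, but tedious for a human working by eye."""
    return s[::-1]

print(reverse_digits("8347102"))  # -> "2017438"
```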

Yes, ChatGPT Is Sentient — Because It’s Really Humans in the Loop

ChatGPT itself told me there could be humans crafting its input. My tests indicate that that’s likely true
OpenAI recently released a new AI program called ChatGPT. It left the internet gobsmacked, though some were skeptical and concerned about its abilities. Particularly about ChatGPT writing students’ homework for them! [ChatGPT] also appears to be ominously good at answering the types of open-ended analytical questions that frequently appear on school assignments. (Many educators have predicted that ChatGPT, and tools like it, will spell the end of homework and take-home exams.) – Kevin Roose, “The Brilliance and Weirdness of ChatGPT” at New York Times (December 5, 2022) The really amazing thing is ChatGPT’s humanlike responses. They give an observer an unnerving suspicion that the AI is actually sentient. Maybe it is actually sentient. Wait, what? You heard

CAPTCHA: How Fooling Machines Is Different From Fooling Humans

Automated censorship is intended to protect against a tidal wave of spam but it could certainly have other uses…
Readers of Mind Matters News have likely heard of the iconic Turing test. Computer pioneer Alan Turing famously invented a test to determine whether a program could pass as a human. The gist is, if a program can fool human testers into believing it is a human, then the program is intelligent. Not everyone is convinced. Thing is, it doesn’t take much to fool us humans! Take Eliza, a program of only a few hundred lines, written in the 60s, which fooled many people into believing it was a real human therapist. But what if we flip the Turing test on its head? Instead of a test where a program tries to pass as human, we use a test that a program cannot pass, but a human can. For example, consider the CAPTCHA test we encounter on many websites. The term “CAPTCHA” stands for

AI Art Is Not “AI-Generated Art.” It is Engineer-Generated Art

The computers aren’t taking over the art world. The engineers are. Just the way engineers have taken over the music world with modern electronic music
Creativity is a mysterious thing. Our world economy is powered by creativity, yet despite the best efforts of our best engineers, creativity has not been captured by a machine. Until recently. With the new school of AI, things have changed. We now have GPT-3, which can digress at length about any topic you give it. Even more remarkable, we have the likes of Dall-E, Midjourney, and Stable Diffusion. These phenomenal AI algorithms have scaled the peak of human creativity. AI can now create art that has never been seen before. The new artistic AI has become so successful that image social networks have become flooded with its artwork. Some communities have even banned the AI art. But the AI art is fun, and imaginative! One enterprising individual even won a fine art

How We Know the Mind Is About Information, Not Matter or Energy

The computer program’s world is one of binary 0 or 1 decisions but the physical world is one of many different shades of more or less
It’s really hard to picture the “mind,” isn’t it? You might think of wavy ghosts, or a spectral light. But nothing very definite. The brain, on the other hand, is very easy to visualize. Images and videos are just a Google away. That’s why it’s easy to assume that our brains are the entities that do our thinking for us. The brain is not only easy to image, it is physical. We can (in theory) touch it. Poke it. The brain even runs off electricity, just like your computer. But what makes a computer run Windows? It isn’t just the transistors on silicon wafers. It isn’t just the electricity coursing through the circuits. Windows itself is a ghostly being, like our mind. It is the structure of electrical signals in your computer. But the electrical signals

How AI Neural Networks Show That the Mind Is Not the Brain

A series of simple diagrams shows that, while AI learns faster than the human brain, the human mind tackles problems that stump AI
Recently, I’ve been arguing (here and here, for example) that we can use artificial neural networks (ANNs) to prove that the mind is not the brain. To recap, here is the logic of my argument:

Premise A: neural networks can learn better than the brain
Premise B: the human mind can learn better than a neural network
Conclusion: the human mind can learn better than the brain, therefore it is not the brain

This means if we can conclusively show the human mind can learn better than a neural network, then the mind is not the brain. For Premise A, I’ve argued that the differentiable neural network is a superior learning model compared to the brain neuron’s “all or nothing principle”. The neural network has a “hot” or “cold” signal that it can learn from iteratively,
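The “hot” or “cold” signal mentioned in this teaser is the gradient: a differentiable model receives a graded error signal it can follow step by step, unlike an all-or-nothing spike. A minimal sketch of one differentiable weight trained by gradient descent; the toy loss and learning rate are illustrative, not any particular network.

```python
def loss(w, x=2.0, target=1.0):
    """Squared error of a one-weight 'network': (w*x - target)^2."""
    return (w * x - target) ** 2

def grad(w, x=2.0, target=1.0):
    """Derivative of the loss with respect to w: the graded hot/cold signal,
    telling the learner both which direction to move and how far off it is."""
    return 2 * x * (w * x - target)

w = 0.0
for _ in range(100):
    w -= 0.1 * grad(w)  # step downhill along the error signal
print(w)  # converges to 0.5, since 0.5 * 2.0 hits the target of 1.0
```

An all-or-nothing unit, by contrast, would only report "fired" or "did not fire," giving the learner no graded signal to descend.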