Eric Holloway

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence

Eric Holloway is a Senior Fellow with the Walter Bradley Center for Natural & Artificial Intelligence, and holds a PhD in Electrical & Computer Engineering from Baylor University. A Captain in the United States Air Force, he served in the US and Afghanistan. He is the co-editor of Naturalism and Its Alternatives in Scientific Methodologies.

Archives

How Could Intelligent Design Help Us In a Conflict?

Well, what would happen if Daffy Duck teamed up with Marvin the Martian?
In war, the goal is to eliminate a threat as quickly as possible, given available resources. We try to hit the center of a target with the fewest arrows. A key mathematical concept of intelligent design, active information, captures this dilemma. It also helps us understand the role artificial intelligence might play in wartime.

Active information

Active information is the difference between two sources of information. Picture archers shooting at a target:

1. Endogenous information: What is the size of the target, and how difficult is it to hit? It is the difference between hitting a squirrel and hitting the moon.
2. Exogenous information: How skilled is the archer firing the arrows? What difference does the archer’s skill make?

Let’s say we have a
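For readers who want to see the arithmetic, here is a minimal sketch in Python of how active information is typically computed in the framework the article draws on; the probabilities for the blind shot and the skilled archer are made up for illustration.

```python
import math

def endogenous_information(p_blind):
    """Difficulty of the problem itself: how unlikely a blind shot is to hit the target."""
    return -math.log2(p_blind)

def exogenous_information(p_assisted):
    """Remaining difficulty once the archer's skill (the assisted search) is factored in."""
    return -math.log2(p_assisted)

def active_information(p_blind, p_assisted):
    """Information the archer's skill adds: endogenous minus exogenous."""
    return endogenous_information(p_blind) - exogenous_information(p_assisted)

# Illustrative numbers: a blind shot hits the bullseye 1 time in 1024,
# while the skilled archer hits it 1 time in 4.
print(active_information(1/1024, 1/4))  # 10 - 2 = 8 bits of active information
```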

How Is Intentionality Embedded in the Universe?

All efforts to extinguish intentionality and morality only serve to further establish their inescapable reality
The conclusion we must reach by examining our own intentionality carefully is that it has an ultimate origin from a conscious being outside of our world.

Could Our Minds Be Bigger Than Even a Multiverse?

The relationship between information, entropy, and probability suggests startling possibilities. If you find the math hard, a face-in-the-clouds illustration works too
This article has equal entropy to gibberish letters of the same length, yet it contains information and gibberish does not. Much follows from that fact.
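As a rough illustration of the kind of comparison being made (not the article's own calculation), a naive character-frequency entropy estimate cannot tell a meaningful sentence from letter salad of the same length:

```python
import math, random, string
from collections import Counter

def char_entropy(text):
    """Empirical Shannon entropy, in bits per character, from character frequencies."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

article = "this article has equal entropy to gibberish letters of the same length"
gibberish = "".join(random.choice(string.ascii_lowercase + " ") for _ in article)

# Both strings come out to a few bits per character; the frequency count
# is blind to the fact that only one of them says anything.
print(round(char_entropy(article), 2))
print(round(char_entropy(gibberish), 2))
```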

Is Your Mind Bigger Than the Universe? Well, Look At It This Way…

Surprisingly, there is a way to measure the mind that shows it IS bigger than the universe — information
Imagine you’re sitting at home, relaxing in your favorite easy chair. Go on, kick your legs up. Feel your limbs releasing the stress of the day, starting from the extremities, and progressing up your core to your head. Now, let your mind expand. Let go of what is holding your mind down. Feel it become free, outside of everything around it. Let the feeling continue until your mind is bigger than the universe. Now consider the question: if your mind is bigger than the universe, can it be within the universe? If a ball is bigger than a bag, can it be contained by the bag? Of course not. If the mind is bigger than the universe, then it must be outside of the universe. Of course, a daydream in the easy chair is proof of nothing. Plus, how can we measure the mind? We can’t poke or

Why Is Theology the Most Important Empirical Science?

Arguing pro or con about the existence of God has resulted in many successful and/or widely accepted theories in science
If generating testable theories in empirical science is the standard of success, theology has certainly succeeded, as the record will show.

Can AI Really Start Doing Evil Stuff All By Itself?

We need to first talk to the man in the mirror before we go around blaming transistor circuit boards for what’s wrong in the world
Far from being independent superintellects, AIs — without human input — are subject to Model Collapse, where they start reprocessing information into nonsense.

Can There Really Be an Ultimate Happiness Machine?

Technology can do so much. Can it really provide an answer to the eternal human quest for happiness?
What if we had a machine that could access and manipulate the internals of the human mind? It would fall victim to the halting problem.

AI and the Chinese Room Argument

We still haven't cracked the mystery of human intelligence.
Eighty years later, we are using the same paradigm, with much faster computers and vast data, but we still haven't cracked the mystery of human intelligence.

Ancient Greek Philosophy and Modern Blockbuster Graphics

The amazing computer-generated effects you see in almost every blockbuster today are only possible thanks to ideas proposed over 2300 years ago.
Ancient philosophy can be extremely useful, and entertaining, even when contrary to modern science! See it in the movies!

Life According to the Turing Machine

Is there more to the world than just data and digits?
"Virtually just like real life!" as advertisements proclaimed. All it did was give John a headache, and feeling he'd just wasted a couple of hours of his life.

Does ChatGPT Pass the Creativity Test?

What does ChatGPT have to do in order to be considered creative?
What is creativity? Where does it come from? Why are some things humans do considered creative, while others are considered mundane? Can AI be creative? To answer these questions, let’s come up with a definition. Creativity at least means something new has been done. No work that copies what has come before is considered creative.

Creativity Criteria

Just doing something new is not enough either. If it were, then I could easily be creative by flipping a coin 100 times. That specific sequence of coin flips will only occur once in the entire history of humanity. But no one would say I was creative when I flipped a coin. This means creativity has to generate a new insight. However, these two criteria are not adequate, either. I could flip a
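Here is the arithmetic behind the coin-flip point, as a quick sketch: a specific run of 100 flips is astronomically unlikely ever to recur, yet nobody calls it creative.

```python
# A specific sequence of 100 fair coin flips has probability 2**-100.
# It is "new" in the sense that it will almost certainly never be repeated,
# yet novelty alone does not make it creative.
print(2 ** 100)            # 1267650600228229401496703205376 possible sequences
print(f"{2.0 ** -100:e}")  # about 7.9e-31 probability of any one specific sequence
```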

Can AI Create its Own Information?

The simple answer is "no," but why? Eric Holloway explains
AI is amazing. It is all the rage these days. Companies everywhere are jumping on the AI bandwagon. No one wants to be left behind when true believers are raptured to the mainframe in the sky.

What makes the AI work?

The AI works because of information it gained from a human-generated dataset. Let’s label the dataset D. We can measure the information in the dataset with Shannon entropy. Represent the information with H(D). When we train an AI with this data, we are applying a mathematical function to the dataset. This function is the training algorithm. Labelling the training algorithm T, we represent training as T(D). The outcome of training is a new AI model. The model generates new data. We represent the generator function as G. When we apply G to T(D), then this is a new
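A minimal sketch of the underlying point, with toy stand-ins for T and G rather than real training code: a deterministic function of the data can never increase its Shannon entropy.

```python
import math
from collections import Counter

def entropy(samples):
    """Empirical Shannon entropy H of a list of outcomes, in bits."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# D: a toy "human-generated dataset" of symbols.
D = list("abracadabra alakazam abracadabra")

def T(data):
    """Hypothetical stand-in for training: collapse each symbol to a coarse category."""
    return ["V" if ch in "aeiou" else ("_" if ch == " " else "C") for ch in data]

def G(model):
    """Hypothetical stand-in for generation: relabel the categories as output symbols."""
    return [{"V": "1", "C": "0", "_": " "}[ch] for ch in model]

# Deterministic processing can only merge or relabel outcomes, so the entropy
# of G(T(D)) never exceeds the entropy of D.
print(round(entropy(D), 3))        # H(D)
print(round(entropy(G(T(D))), 3))  # H(G(T(D)))
```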

Say What? AI Doesn’t Understand Anything 

Is that supposed to be a cat, Mr. AI?
Whenever I look at AI-generated content, whether pictures or text, it all has the same flaw. The AI cannot comprehend what it is making. Let me explain. When we humans draw a picture, we are drawing a concept. We are drawing something like “cat climbs a tree” or “cowboy riding into the sunset”. It seems like this is what is happening with a picture-drawing AI. We give it a prompt, and it draws an associated picture. On second thought, maybe not… When AI draws the picture, what is really going on is that it is finding individual colored pixels that correlate with the letters we typed, using the massive database stored in its neural network. Very different from how we draw. We sketch a scene, draw general shapes, then fill in the

Minecraft: A World of Information

The world's bestselling video game captures the insight that information is created and consumed by human minds
What if I told you intelligent design theory is responsible for the most successful computer game of all time? This game is Minecraft. It has sold over 238 million copies, making it the best-selling game of all time. What makes the game even more extraordinary is that it was created entirely by one man, Markus Persson, over a weekend; he later sold the game to Microsoft for $2.5 billion. Hard to make this sort of thing up.

How does Minecraft work?

You can think of Minecraft as a computer-game form of Lego, the popular building-block toy, with added monsters. You are dropped into an algorithmically generated world where you have to discover resources, find food, and build structures to survive the day-night cycle. At night, the world becomes populated by fearsome
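As a rough illustration of what "an algorithmically generated world" means (nothing like Minecraft's actual generator), here is a sketch in which a short seed deterministically expands into terrain:

```python
import random

def generate_heightmap(seed, width=16, depth=16):
    """Toy terrain generator: the same seed always produces the same heightmap,
    so the whole 'world' is determined by a few bits of input."""
    rng = random.Random(seed)
    heights = [[0] * width for _ in range(depth)]
    for z in range(depth):
        for x in range(width):
            # Smooth-ish terrain: average of already-generated neighbours, plus noise.
            neighbours = []
            if x > 0:
                neighbours.append(heights[z][x - 1])
            if z > 0:
                neighbours.append(heights[z - 1][x])
            base = sum(neighbours) / len(neighbours) if neighbours else 64
            heights[z][x] = int(base + rng.uniform(-2, 2))
    return heights

chunk = generate_heightmap(seed=42)
print(chunk[0][:8])  # one row of block heights; rerunning with seed=42 gives the same row
```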

For BitHeaven’s Sake

A satirical short story on the transhumanist quest (and failure) to achieve immortality
Bob and Sue were on their way to church one morning. On their way they ran into their friend Fred. Fred was very wealthy, a billionaire in fact. Fred waved hi. Bob and Sue waved back. They asked Fred to come with them to church. Fred said no, he had more important things to do. “What is so important?” asked Sue. “I’m off to the real deal,” beamed Fred. Bob looked confused. “Real deal about what?” “You have a fake promise of eternal life. I’m about to get the real thing.” “You can’t be serious. Start talking some sense.” “Seriously. Here’s my voucher, see it right here.” Sue grabbed the piece of paper from Fred and read it aloud. “Good for one digital

AI and Human Text: Indistinct?

Here's a mathematical proof that challenges the assumption that AI and human-made text are the same
What is a poor teacher to do? With AI everywhere, how can he reliably detect when his students are having ChatGPT write their papers for them? To address this concern, a number of AI text detector tools have emerged. But do they work? A recent paper claims that AI-generated text is ultimately indistinguishable from human-generated text. The authors illustrate their claim with a couple of experiments that fool AI text detectors with simple variations to AI-generated text. Then, they go on to mathematically prove their big claim that it is ultimately impossible to tell AI text and human text apart. However, the authors make a crucial assumption.

Faulty Premises

The proof assumes that AI-generated text will become closer and closer to human-generated text until the
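To see the flavor of such proofs (this is a generic statistical sketch, not necessarily the paper's exact bound): if the two text distributions converge in total variation distance, any detector's best possible accuracy sinks toward a coin flip.

```python
def tv_distance(p, q):
    """Total variation distance between two distributions over the same outcomes."""
    return 0.5 * sum(abs(p[k] - q[k]) for k in p)

# Made-up word distributions standing in for human and AI text.
human = {"the": 0.50, "cat": 0.30, "sat": 0.20}
ai    = {"the": 0.48, "cat": 0.32, "sat": 0.20}

tv = tv_distance(human, ai)
print(round(tv, 4))            # 0.02
# With balanced classes, no single-sample detector can beat 1/2 + TV/2 accuracy,
# so as TV shrinks toward 0 the best detector approaches guessing.
print(round(0.5 + tv / 2, 4))  # 0.51
```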

AI vs. Human Intentionality

If ChatGPT were trained over and over on its own output, it would eventually turn to gibberish
We can do a simple experiment that demonstrates the difference between AI and human intentionality. ChatGPT and the like are a sophisticated form of a mathematical model known as a Markov chain. A Markov chain is based on the Markov assumption that the future is entirely a product of the recent past. In other words, if we know the recent past, then nothing else we learn about the more distant past will improve our ability to predict the future. In ChatGPT terms, this means ChatGPT is based on the assumption that everything we need to know to predict future words is contained within a limited window of previously seen words. ChatGPT’s window was 3,000 words, and I believe the newest version has increased the window tenfold to 30,000 words. The important point
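The experiment is easy to imitate with a far simpler Markov chain than ChatGPT; this toy sketch retrains a word-level chain on its own output a few times.

```python
import random
from collections import defaultdict

def train(text):
    """Fit a first-order word-level Markov chain: the next word depends only on the current word."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=40, seed=0):
    """Generate text by repeatedly sampling a successor of the current word."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        successors = model.get(word)
        if not successors:
            break
        word = rng.choice(successors)
        output.append(word)
    return " ".join(output)

corpus = ("the quick brown fox jumps over the lazy dog while the quick grey cat "
          "watches the lazy dog sleep under the old brown tree near the quiet river")

text = corpus
for generation in range(5):
    model = train(text)
    text = generate(model, start="the", seed=generation)
    print(f"gen {generation}: {text[:60]}")
# Each round is trained only on the previous round's output, so the vocabulary can
# only shrink, never grow: a toy version of the degeneration the article describes.
```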

We Can’t Build a Hut to the Moon

The history of AI is a story of a recurring cycle of hype and disappointment
Once upon a time there lived a tribe on the plains. They were an adventurous tribe, constantly wanting to explore. At night they would see the moon drift lazily overhead, and they became curious. How could they reach the moon? The moon was obviously higher than their huts. Standing on the highest hut, no one could reach the moon. At the same time, standing on the hut got them closer to the moon. So, they decided to amass all their resources and build a gigantic hut. Their reasoning was that if standing on a short hut got them closer to the moon, then standing on a gigantic hut would get them even closer. Eventually the tribe ran out of mud and rocks, and though the gigantic hut did get the tribe even closer to the moon, it would still always drift just tantalizingly out

ChatGPT Violates Its Own Model

Based on these exchanges, we can at least say the chatbot is more than just the ChatGPT neural network
Here is a quick overview of how ChatGPT operates under the hood. This will make it easier to spot suspicious behavior. The following is at a very high level. For the gory details, see the following two guides:

– The Illustrated GPT-2 (Visualizing Transformer Language Models) – Jay Alammar – Visualizing machine learning one concept at a time. (jalammar.github.io)
– The GPT-3 Architecture, on a Napkin (dugas.ch)

What is ChatGPT?

Let’s start with what ChatGPT is. ChatGPT is a kind of machine learning algorithm known as a neural network. To understand what a neural network is, recall your algebra classes. You remember being given a set of equations and being told to solve for some variables. Then you learned you could turn the set of equations into a
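A minimal sketch of that "set of equations" picture, with made-up weights; it is nowhere near the scale of GPT, but the algebra is the same kind.

```python
import math

def layer(inputs, weights, biases):
    """One layer of equations: weighted sums of the inputs fed through a squashing function."""
    return [
        math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

x = [1.0, 0.5]                                         # input variables
h = layer(x, [[0.2, -0.4], [0.7, 0.1]], [0.0, -0.1])   # hidden layer: two equations
y = layer(h, [[0.5, 0.5]], [0.2])                      # output layer: one more equation
print(y)  # the whole model is just these nested equations with fixed coefficients
```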