Eric Holloway

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence

Eric Holloway is a Senior Fellow with the Walter Bradley Center for Natural & Artificial Intelligence, and holds a PhD in Electrical & Computer Engineering from Baylor University. A Captain in the United States Air Force, he served in the US and Afghanistan. He is the co-editor of Naturalism and Its Alternatives in Scientific Methodologies.

Archives

Why GPT-3 Can’t Understand Anything

Without long-term memory, human conversation becomes impossible
There is a mathematical reason why machine learning systems like GPT-3 are incapable of understanding. The reason comes down to the fact that machine learning has no memory; it is just probabilistic associations. If there is only a 10% chance of going off topic at each exchange, then the chance of staying on topic the whole time is 0.9 multiplied by itself once per exchange, so after just seven exchanges (0.9^7 ≈ 0.48) there is a greater than 50% chance the machine learning model has gone off topic. The problem is that when prediction is based only on probabilities, the chance of remaining coherent shrinks exponentially with the length of the conversation. A long-term memory is needed in order to maintain long-term coherence. GPT-3 is essentially a sophisticated Markov process. What is important about the Markov process is that the next step in the process is only dependent on…
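For a concrete sense of the arithmetic, here is a minimal Python sketch of that drift calculation (the 10% per-exchange chance of going off topic is the excerpt's illustrative figure, not a measured property of GPT-3):

p_stay = 0.90               # assumed: 90% chance a single exchange stays on topic

exchanges = 0
p_never_drifted = 1.0
while p_never_drifted > 0.5:
    exchanges += 1
    # Memoryless, Markov-style step: coherence only ever gets multiplied down.
    p_never_drifted *= p_stay

print("After %d exchanges the chance of never having gone off topic is %.3f"
      % (exchanges, p_never_drifted))
# Prints 7 exchanges with probability roughly 0.478, i.e. a >50% chance of drift.

Because each step depends only on the previous one, the small per-exchange error compounds, which is exactly the Markov behavior the excerpt describes.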

AI Companies Are Massively Faking the Loophole in the Turing Test

I propose the Turing Test be further strengthened by presuming a chatbot is human until proven otherwise
Computer pioneer Alan Turing posed the question: how do we know if an AI has human-like intelligence? He offered his famous Turing test: If human judges cannot differentiate the AI from a human, then it has human-like intelligence. His test has spawned a number of competitions in which participants try to fool judges into thinking that a chatbot is really a human. One of the best-known chatbots was Eugene Goostman, which fooled the judges into thinking it was a 13-year-old boy — mostly by indirection and other distraction techniques to avoid the sort of in-depth questioning that shows that a chatbot lacks understanding. However, there is a loophole in this test. Can you spot the loophole? What better…

Does Information Weigh Something After All? What If It Does?

At the rate we create information today, one physicist computes that in 350 years, the energy of that information will outweigh the atoms of Earth
In the 1960s, IBM researcher Rolf Landauer (1927–1999) observed that if the logical information in a computational system decreases, then the physical entropy of the system must increase (Landauer’s Principle). The conclusion follows from the fact that the total entropy of a closed system can never decrease: erasing logical information is a decrease in entropy, so the physical part of the system must gain at least as much entropy as the logical part loses. That increase in physical entropy shows up as heat emitted by the system, carrying energy away with it. Now Melvin Vopson, a physicist at the University of Portsmouth, has taken Landauer’s principle to the next logical step. He…
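For scale, here is a short Python sketch of the quantitative bound Landauer’s principle implies, namely that erasing one bit must dissipate at least k_B·T·ln(2) of heat (the room temperature and the one-gigabyte example are assumptions chosen purely for illustration):

import math

k_B = 1.380649e-23          # Boltzmann constant, in joules per kelvin
T = 300.0                   # assumed room temperature, in kelvin

# Landauer's bound: erasing one bit dissipates at least k_B * T * ln(2) of heat.
heat_per_bit = k_B * T * math.log(2)
heat_per_gigabyte = heat_per_bit * 8e9      # one gigabyte = 8 billion bits

print("Minimum heat per erased bit at %.0f K: %.2e J" % (T, heat_per_bit))
print("Minimum heat to erase one gigabyte:    %.2e J" % heat_per_gigabyte)
# Roughly 2.9e-21 J per bit and 2.3e-11 J per gigabyte.

The per-bit number is tiny, which is why the interesting question is what happens when the amount of stored information keeps growing exponentially.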

Soylent AI is…people!

OpenAI advertises itself as AI-powered, but at the end of the day, the system is human-powered
In the sci-fi movie “Soylent Green,” the big reveal is that a food called soylent green is actually made from human beings, the catchphrase being “soylent green is people.” Likewise, as I discovered from a recent exchange with OpenAI’s GPT-3, “soylent AI is people.” GPT-3 is the product of AI company OpenAI. The company made headlines in 2019 with the claim that their AI model was too dangerous to publicly release. OpenAI is not a mere research company. While their publicly stated goal is fairly modest – “Aligning AI systems with human intent” – their CEO Sam Altman has bigger plans. He left his role as president of Y Combinator, one of Silicon Valley’s most successful venture capital…

Dawkins’ Dubious Double Weasel and the Combinatorial Cataclysm

Dawkins has successfully reduced a combinatorial explosion to a manageable problem...or has he?
In his book The Blind Watchmaker, Richard Dawkins proposed a famous (and infamous) computer program to demonstrate the power of cumulative selection, known as the “Weasel program.” The program demonstrates that by varying a single letter at a time, it is possible to rapidly evolve a coherent English sentence from a string of gibberish. The way the program works is as follows: First, a sequence of characters is randomly assembled by drawing from the 26 English letters and the space. Then, one character is randomly reassigned. The resulting sequence is compared to the phrase from Hamlet, a quote uttered by Polonius: “methinks it is like a weasel.” For every character that matches, a point is scored. If the new sequence…
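As a rough illustration, here is a minimal Python sketch of the single-character hill climb described in the excerpt. Because the excerpt is truncated, the acceptance rule (keep the mutant if it scores at least as well) is an assumption, and Dawkins’ original program actually bred a batch of mutated copies each generation and kept the best one:

import random
import string

TARGET = "methinks it is like a weasel"
ALPHABET = string.ascii_lowercase + " "   # the 26 English letters plus the space

def score(candidate):
    """Count how many characters already match the target phrase."""
    return sum(c == t for c, t in zip(candidate, TARGET))

# Start from a random string of gibberish the same length as the target.
current = [random.choice(ALPHABET) for _ in range(len(TARGET))]
mutations = 0

while score(current) < len(TARGET):
    mutations += 1
    mutant = current[:]
    # Randomly reassign a single character, as in the description above.
    mutant[random.randrange(len(TARGET))] = random.choice(ALPHABET)
    # Assumed rule: keep the mutant if it scores at least as well as before.
    if score(mutant) >= score(current):
        current = mutant

print("Matched '%s' after %d single-character mutations." % ("".join(current), mutations))

Even this stripped-down variant typically converges in a few thousand mutations, which highlights the point of contention: the target phrase is wired into the scoring function from the start.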

Can Computers — and People — Learn To Think From the Bottom Up?

That’s the big promise made in a recent article at Aeon
Tufts University biologist Michael Levin and Columbia University neuroscientist Rafael Yuste have an ambitious project in hand: To explain how evolution “‘hacked’ its way to intelligence from the bottom up,” that is, from nothing. They base their thesis on computer science: This is intelligence in action: the ability to reach a particular goal or solve a problem by undertaking new steps in the face of changing circumstances. It’s evident not just in intelligent people and mammals and birds and cephalopods, but also cells and tissues, individual neurons and networks of neurons, viruses, ribosomes and RNA fragments, down to motor proteins and molecular networks. Across all these scales, living things solve problems and achieve goals by flexibly navigating different spaces –…

Is AlphaZero Actually Superior to the Human Mind?

Comparing AI and the human mind is completely apples and oranges
The Google-backed AI company DeepMind made headlines in March 2016 when its AlphaGo game AI engine was able to defeat Lee Sedol, one of the top Go players in the world. DeepMind followed up this achievement in 2017 with the AlphaZero engine, which soundly beat AlphaGo at Go as well as one of the world’s best chess engines at chess. The interesting difference between AlphaGo and AlphaZero is that AlphaGo uses databases of top human games for learning, while AlphaZero learns only by playing against itself. Using the same AI engine to dominate two different games, while also discarding reliance on human games, suggests that DeepMind has found an algorithm that is intrinsically superior…

“Slightly” Conscious Computers Could Doom Atheism

That might sound surprising but let’s follow the logic of the “consciousness” claim through to its inevitable conclusion
Recently, Ilya Sutskever, co-founder of OpenAI, proposed that artificial intelligence (AI) may currently be “slightly” conscious. His claim was probably in reference to the GPT-3 AI that can generate text from a prompt. I’ve played with a couple of the linguistic neural networks a bit, and you can try them out here. Some of the output is quirky, which could be mistaken for personality and make the algorithm appear conscious. The algorithm also generates emotional statements that can evoke empathy in a human user of the system. Just as kids make believe their dolls are alive when they develop an emotional bond with their toy, the algorithm’s text evokes empathy in the human user. It can make us feel a…

Chalmers and Penrose Clash Over “Conscious Computers”

Philosopher Chalmers thinks computers could be conscious but physicist Penrose says no
Two authors I’ve been reading recently are Roger Penrose and David Chalmers. Penrose is a physics Nobel laureate who has stoked controversy by claiming in The Emperor’s New Mind: Concerning Computers, Minds and The Laws of Physics (1989) that the mind can do things beyond the ability of computers. Chalmers is a philosopher of science who claims in The Conscious Mind: In Search of a Fundamental Theory (1997) that consciousness cannot be reduced to physical processes. Both thinkers are well respected in their fields, even though they articulate positions that imply that the mind’s operation is beyond current science. At the same time, they believe that there is a way to see the mind as part of nature (that is,…

Are the Brain Cells in a Dish That Learned Pong Conscious?

Human-derived organoids learned faster than AI and always outperformed mouse-derived organoids in terms of volley length, raising troubling questions
Recently, science media were abuzz with a remarkable story about minibrains (mouse and human brain cells in a dish) learning to play the video game Pong: Scientists have successfully taught a collection of human brain cells in a petri dish how to play the video game “Pong” — kind of. Researchers at the biotechnology startup Cortical Labs have created “mini-brains” consisting of 800,000 to one million living human brain cells in a petri dish, New Scientist reports. The cells are placed on top of a microelectrode array that analyzes the neural activity. “We think it’s fair to call them cyborg brains,” Brett Kagan, chief scientific officer at Cortical Labs and research lead of the project, told New Scientist. Tony Tran,…

What Darwinism Fails to Explain about Business Enterprise

On today’s ID the Future, host Jay Richards talks with Eric Holloway about his recent Mind Matters article, “Can Darwinian Theory Explain the Rise and Fall of Businesses?” Why would anyone think Darwinian theory could explain business ups and downs? Holloway explains, and also notes that there’s an entire sub-discipline, organizational ecology, dedicated to studying business from a Darwinian framework. Richards, who has published on Darwinism, design, economics, and entrepreneurship himself, also weighs in. Darwinism sees business as survival of the fittest, with natural selection playing an obvious role, but where do the businesses and the innovations come from in the first place? Here is where Darwinism really founders as a tool for understanding business and entrepreneurship, says Holloway. It’s…