Gary N. Smith

Senior Fellow, Walter Bradley Center for Natural and Artificial Intelligence

Gary N. Smith is the emeritus Fletcher Jones Professor of Economics at Pomona College. He is the author of more than 100 research papers and 20 books, including The AI Delusion (Oxford University Press, 2018; translated into Korean, Vietnamese, Chinese for sale in Taiwan, and simplified Chinese for sale in mainland China), The 9 Pitfalls of Data Science, with Jay Cordes (Oxford University Press, 2019; winner of the 2020 PROSE Award for Popular Science and Popular Mathematics), The Phantom Pattern Problem: The Mirage of Big Data, with Jay Cordes (Oxford University Press, 2020), Distrust: Big Data, Data-Torturing, and the Assault on Science (Oxford University Press, 2023), and Standard Deviations: The truth about flawed statistics, AI and big data (Duckworth, 2024).

Archives

Computer Intelligence Versus Human Intelligence

Professor Joseph Weizenbaum created a chatbot he named ELIZA that conversed with users the way a psychotherapist might
Even though the users knew they were interacting with a computer, many were convinced that the program had human-like intelligence and emotions, and they happily shared their deepest feelings and most closely held secrets.

The Core Problem with Large Language Models

LLMs are inherently unreliable, which means that failures are not incidental, easily fixable glitches
The data deluge exponentially increases the number of coincidental, useless statistical patterns, so the probability that any discovered pattern is useful approaches zero.
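A minimal simulation illustrates the point (the sample size and significance cutoff below are my illustrative assumptions, not figures from the article): correlating a pure-noise outcome with ever larger sets of pure-noise predictors turns up ever more coincidental "significant" patterns, none of them useful.

```python
import numpy as np

# Sketch: spurious "significant" correlations proliferate as the number
# of candidate variables grows, even though every series here is pure
# noise with no real relationship to the outcome.
rng = np.random.default_rng(0)
n_obs = 100
target = rng.normal(size=n_obs)             # a noise "outcome"
tc = target - target.mean()

for n_vars in (10, 100, 1_000, 10_000):
    X = rng.normal(size=(n_vars, n_obs))    # unrelated noise predictors
    Xc = X - X.mean(axis=1, keepdims=True)
    # Pearson correlation of each noise predictor with the noise outcome
    r = (Xc @ tc) / (np.sqrt((Xc ** 2).sum(axis=1)) * np.sqrt((tc ** 2).sum()))
    # |r| > 0.196 is roughly the 5% two-sided significance cutoff for n = 100
    hits = int((np.abs(r) > 0.196).sum())
    print(f"{n_vars:>6} noise variables -> {hits} spurious 'significant' patterns")
```

The count of spurious hits grows in lockstep with the number of variables (roughly 5% of them), while the number of genuinely useful patterns stays at zero.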

Is OpenAI Approaching the Valley of Death?

Overpromising is still a big problem; in any event, the fate of Netscape looms
OpenAI needs to show that ChatGPT is more than just the first publicly available LLM. It has not done that and maybe never will.

Home Ownership: Madness Over 30-Year Mortgages

It is tempting to put borrowing and investment decisions in separate mental buckets, but they are intimately related
When considering a mortgage, the correct comparison is not total payments but the return on the borrowed money versus the loan’s annual percentage rate.
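A hedged worked example (the loan size, rate, and returns below are hypothetical numbers, not from the article) shows why the APR, not the total of the payments, is the break-even return on the borrowed money:

```python
# Hypothetical numbers: $100,000 borrowed for 30 years at a 6% APR.
principal = 100_000
apr = 0.06
n = 30 * 12
r = apr / 12                                     # monthly rate

# standard fixed-payment mortgage formula
payment = principal * r / (1 - (1 + r) ** -n)
print(f"monthly payment ${payment:,.2f}, total payments ${payment * n:,.0f}")

# The total of the payments (about $216,000) looks alarming next to the
# $100,000 borrowed, but the right comparison is the return earned on the
# borrowed money versus the APR. Compounding both sides at the same
# assumed return shows that the APR is the break-even point:
for annual_return in (0.04, 0.06, 0.08):
    m = annual_return / 12
    fv_invested = principal * (1 + m) ** n       # borrowed money, invested
    fv_payments = payment * ((1 + m) ** n - 1) / m   # payment stream, compounded
    print(f"return {annual_return:.0%}: borrowing nets ${fv_invested - fv_payments:,.0f}")
```

Earning less than the 6% APR makes borrowing a loser, earning exactly the APR breaks even, and earning more makes it a winner, no matter how large the total-payments figure looks.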

Illusions: No, Large Language Models Do Not Understand

A recent New Yorker article is mistaken about this. For one thing, LLMs have trouble distinguishing causation from mere correlation
A real-world example of such struggles is the poor performance, in general, of AI-powered mutual funds.

GPT 5.0 Doesn’t Understand But Is Eager to Please

Over a number of tries, it couldn't get the labels on an illustration right because it does not understand what the words mean or how they relate to the image
The test also shows GPT 5.0’s inclination to praise a user’s acuity, whether the user’s comment is correct or incorrect, intelligent or dumb.

What Kind of a “PhD-level Expert” Is ChatGPT 5.0? I Tested It.

The responses to my three prompts made clear that GPT 5.0, far from being the expert that CEO Sam Altman claims, can’t address the meanings of words or concepts
In an era where politicians, celebrities, and businesses can get away with blatant untruths with little or no consequence, will the same be true of LLMs?

AI Is a Long Way From Replacing Software Coders

Despite C-suite claims, LLMs are not likely to take our jobs any time soon because they do not understand what words mean or how words relate to the physical world
LLMs excel at simple coding tasks but are still too unreliable to use without extensive human supervision on complex tasks where mistakes are expensive.

Sci Foo Unconference: Horseshoe Crabs, Alchemy and (Of Course) AI

It’s called an “unconference” because attendees do not present papers in pre-organized sessions; they propose topics at the venue, and those that attract the most interest are selected
Many Sci Foo attendees were excited about LLMs in education, but I fear that, given how they are actually used, they will increase social inequality.

No, Large Reasoning Models Do Not Reason

Large reasoning models continue the Large Language Model detour away from artificial general intelligence
It is increasingly recognized that, without extensive post-training, their authoritative answers are often remarkably bad.

LLMs Are Bad at Good Things, Good at Bad Things

LLMs may well become smarter than humans in the near future but not because these chatbots are becoming more intelligent
As people become attached to and dependent on their AI friends, they become less interested in their fellow humans.