Gary N. Smith

Senior Fellow, Walter Bradley Center for Natural and Artificial Intelligence

Gary N. Smith is the Fletcher Jones Professor of Economics at Pomona College. His research on stock market anomalies, statistical fallacies, the misuse of data, and the limitations of AI has been widely cited. He is the author of more than 100 research papers and 18 books, most recently Standard Deviations: The truth about flawed statistics, AI and big data (Duckworth, 2024).

Archives

Yes, the AI Stock Bubble Is a Bubble

It's unfolding the way a financial bubble typically does
Unlike the Internet, LLMs are not useful enough to prompt customers to pay prices that reflect the hundreds of billions of dollars needed to develop them.

Why LLMs Are Not Boosting Productivity

If LLMs were as reliably useful as economist Tyler Cowen alleges, businesses would be using them to generate profits faster than LLMs generate text. They aren’t.
So far, AI is dragging down economic growth by diverting vast amounts of human talent and natural resources away from more productive uses.

Intelligence Requires More Than Following Instructions

Post-training improves the accuracy and usefulness of LLMs but does not make them intelligent in any meaningful sense — as the Monty Hall problem shows
The danger is not that computers are smarter than us but that we think they are and thus trust them to make decisions they should not be trusted to make.
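The Monty Hall problem mentioned above is easy to settle by simulation rather than argument. A minimal sketch, assuming the standard rules (the host always opens a goat door the contestant did not pick); the function name and trial count are illustrative:

```python
import random

def monty_hall(trials=100_000, switch=True):
    """Estimate the contestant's win rate: pick one of three doors,
    the host opens a goat door, then switch or stay."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that hides a goat and was not picked.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials
```

Switching wins about two-thirds of the time; staying wins about one-third, which is exactly the point a statistical pattern-matcher can parrot but not reason its way to.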

Some Lessons From DeepSeek, Compared With Other Chatbots

I tested OpenAI o1, Copilot, and Gemini Flash, along with DeepSeek, on a question about Tic-Tac-Toe
Here’s the problem with the use case for LLMs: If you know the answer, you don’t need to ask an LLM; if you don’t know the answer, you can’t trust one.

Sloppy Science Is a Statistical Sin

Evidence of sloppy science encourages readers to wonder whether the entire research project is compromised
Journals should require authors to make their data available online before publication and encourage others to try to replicate the research they publish.

The Hype and Limitations of Generative AI

On this episode, host Robert J. Marks concludes his conversation with economics professor and author Gary Smith about the hype and limitations of generative AI. Smith is the Fletcher Jones Professor of Economics at Pomona College and a frequent contributor to Mind Matters News. In this portion of the conversation, Smith and Marks explore the hype around artificial general intelligence (AGI) and explain how current large language models lack true reasoning and creative capabilities, despite regular claims of impending AGI from people like OpenAI’s Sam Altman. Smith provides examples demonstrating how these models give nonsensical or incorrect responses to logical problems and financial questions, highlighting their inability to understand context and perform meaningful reasoning.

Large Language Models (LLMs) Flunk Word Game Connections

Despite hype, ChatGPT and its competitors, in all their iterations, are still just text-generators based on statistical patterns in the text databases they train on
Identification of statistical patterns in text that Large Language Models (LLMs) do not understand is not going to give us AGI, let alone superintelligence.

The Promise of Artificial General Intelligence Is Evaporating

Revenue from corporate adoption of AI continues to disappoint and, so far, pales in comparison to the revenue that sustained the dot-com bubble — until it didn’t
Recognition is growing that fundamental challenges make LLMs unreliable. Increasingly expensive scaling will likely hasten the popping of the AI bubble.

The World Series of Coin Flips

Here we go again with the annual coin-flipping ritual known as the World Series
With the Yankees and Dodgers so evenly matched, this World Series will be somewhat like a coin-flipping contest. That's the paradox of luck and skill.
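The coin-flipping point can be made concrete with a short simulation of a best-of-seven series; a sketch under the assumption that games are independent with a fixed per-game win probability (the function name and trial count are my own):

```python
import random

def series_win_rate(p, trials=100_000):
    """Fraction of best-of-seven series won by a team whose
    per-game win probability is p (games assumed independent)."""
    wins = 0
    for _ in range(trials):
        w = l = 0
        while w < 4 and l < 4:  # first to four games wins
            if random.random() < p:
                w += 1
            else:
                l += 1
        wins += (w == 4)
    return wins / trials
```

With evenly matched teams (p = 0.5) the series is literally a coin flip, and even a team good enough to win 55% of individual games still loses the series roughly four times in ten.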

P-Hacking: The Perils of Presidential Election Models

History professor Alan Lichtman’s model uses 13 true/false questions reflecting likely voter interests. But some of them seem rather subjective
Lichtman’s Thirteen Keys to the White House prediction model for the presidency shows some evidence of p-hacking, as does Helmut Norpoth’s Primary Model.

Presidential Pundits—a P-Hacking Parable

In politics, as elsewhere, too many studies flop when other researchers attempt to replicate them with fresh data
Some prediction models were developed by well-intentioned researchers before the perils of p-hacking were clearly understood, hence the failures.
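The replication failures described above can be reproduced with pure noise. A hypothetical sketch (the predictor counts and sample sizes are illustrative, not drawn from any real model): among 100 random binary "keys," the one that best fits 12 past elections looks prescient in-sample but reverts to coin-flip accuracy on fresh data.

```python
import random

def p_hack_backtest(n_predictors=100, n_fit=12, n_test=12, seed=0):
    """Select the random binary predictor that best matches the first
    n_fit outcomes, then score it on n_test fresh outcomes."""
    rng = random.Random(seed)
    outcomes = [rng.randrange(2) for _ in range(n_fit + n_test)]
    predictors = [[rng.randrange(2) for _ in range(n_fit + n_test)]
                  for _ in range(n_predictors)]
    # "Research": keep whichever predictor fits the past best.
    best = max(predictors,
               key=lambda p: sum(a == b for a, b in zip(p, outcomes[:n_fit])))
    in_sample = sum(a == b for a, b in zip(best, outcomes[:n_fit])) / n_fit
    out_sample = sum(a == b for a, b in
                     zip(best[n_fit:], outcomes[n_fit:])) / n_test
    return in_sample, out_sample
```

The selected predictor typically "calls" nine or more of the twelve fitted elections correctly while doing no better than chance on the next twelve, which is the signature of p-hacking.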

Bad Luck Seldom Persists — But It Never Guarantees Good Luck

Many people embrace the fallacious law of averages in their daily lives when "regression toward the mean" is a more realistic picture
For example, the baseball player with the highest batting average in any season generally does not do as well the season before or the season after.
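The batting-average example can be sketched with a simple model in which each season's observed average is true skill plus independent luck; the skill and luck standard deviations below are assumptions for illustration, not empirical estimates:

```python
import random

def leader_regression(n_players=100, trials=500, seed=0):
    """Average gap between the season leader's batting average and the
    same player's average the following season, under skill + luck."""
    rng = random.Random(seed)
    gaps = []
    for _ in range(trials):
        skills = [rng.gauss(0.260, 0.015) for _ in range(n_players)]
        this_year = [s + rng.gauss(0, 0.020) for s in skills]  # skill + luck
        champ = max(range(n_players), key=lambda i: this_year[i])
        next_year = skills[champ] + rng.gauss(0, 0.020)  # fresh luck
        gaps.append(this_year[champ] - next_year)
    return sum(gaps) / trials
```

The gap is positive on average even though the leader's skill is unchanged: topping the league usually means the player's luck ran high that season, and fresh luck the next season pulls the observed average back toward the player's true skill.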

The Government-Debt Tipping Point Is Nonsense

There are serious problems with the economics paper by Reinhart and Rogoff, whose recommendations were widely followed
Reinhart/Rogoff’s dismissal of criticism of their deeply flawed study as “academic kerfuffle” is unconscionably cavalier.