
Gary N. Smith is the Fletcher Jones Professor of Economics at Pomona College. His research on stock market anomalies, statistical fallacies, the misuse of data, and the limitations of AI has been widely cited. He is the author of more than 100 research papers and 18 books, most recently Standard Deviations: The truth about flawed statistics, AI and big data (Duckworth, 2024).
Archives


Yes, Large Language Models May Soon be Smarter than Humans…
But not for the reason you think
LLMs Still Cannot be Trusted for Financial Advice
The limitations of Large Language Models (chatbots) are illustrated by their struggles with financial advice
Large Language Models: A Lack-of-Progress Report
They will not be as powerful as either hoped or feared
Machine Learning Algos Often Fail: They Focus on Data, Ignore Theory
Without a theory, a pattern is just a pattern
Yes, the AI Stock Bubble Is a Bubble
It's unfolding the way a financial bubble typically does
Why LLMs Are Not Boosting Productivity
If LLMs were as reliably useful as economist Tyler Cowen alleges, businesses would be using them to generate profits faster than LLMs generate text. They aren’t.
Intelligence Requires More Than Following Instructions
Post-training improves the accuracy and usefulness of LLMs but does not make them intelligent in any meaningful sense — as the Monty Hall problem shows
The Large Language Model (LLM) “Superpower” Illusion Dies Hard
Historic confirmation bias around ESP and spirit cabinets makes for an interesting comparison with the current need to believe in the abilities of LLMs
Why LLMs (chatbots) Won’t Lead to Artificial General Intelligence
The biggest obstacle is seldom discussed: Most consequential real-world decisions involve uncertainty
Some Lessons From DeepSeek, Compared With Other Chatbots
I tested OpenAI o1, Copilot, and Gemini Flash, along with DeepSeek, on a question about Tic-Tac-Toe
Sloppy Science is a Statistical Sin
Evidence of sloppy science encourages readers to wonder if the entire research project is compromised
The Hype and Limitations of Generative AI

AGI Is Not Already Here. LLMs Are Still Not Even Intelligent
Recent tests continue to show huge failures in comprehending common sense issues
Large Language Models (LLMs) Flunk Word Game Connections
Despite hype, ChatGPT and its competitors, in all their iterations, are still just text-generators based on statistical patterns in the text databases they train on
The Promise of Artificial General Intelligence is Evaporating
Revenue from corporate adoption of AI continues to disappoint and, so far, pales in comparison to the revenue that sustained the dot-com bubble — until it didn’t
Do Fantasy Sports Tell Us Something About Artificial Intelligence?
My biggest takeaway from my own involvement is how well fantasy football illuminates some weaknesses of artificial intelligence (AI)
The World Series of Coin Flips
Here we go again with the annual coin-flipping ritual known as the World Series
P-Hacking: The Perils of Presidential Election Models
History professor Alan Lichtman’s model uses 13 true/false questions reflecting likely voter interests. But some of them seem rather subjective