Gary N. Smith

Senior Fellow, Walter Bradley Center for Natural and Artificial Intelligence

Gary N. Smith is the Fletcher Jones Professor of Economics at Pomona College. His research on financial markets, statistical reasoning, and artificial intelligence, which often involves stock market anomalies, statistical fallacies, and the misuse of data, has been widely cited. He is the author of dozens of research articles and 16 books, most recently The Power of Modern Value Investing: Beyond Indexing, Algos, and Alpha, co-authored with Margaret Smith (Palgrave Macmillan, 2023).

Archives

Bad Luck Seldom Persists — But It Never Guarantees Good Luck

Many people embrace the fallacious law of averages in their daily lives when "regression toward the mean" is a more realistic picture
For example, the baseball player with the highest batting average in any season generally does not do as well in the season before or the season after.

The Government-Debt Tipping Point Is Nonsense

There are serious problems with the economics paper by Reinhart and Rogoff, whose recommendations were widely followed
Reinhart/Rogoff’s dismissal of criticism of their deeply flawed study as “academic kerfuffle” is unconscionably cavalier.

AI Is Still a Delusion

Following instructions and performing fast, tireless, error-free calculations is not intelligence in any meaningful sense of the word
Not knowing what words mean, neither OpenAI’s ChatGPT 3.5 nor Microsoft’s Copilot nor Google’s Gemini can pass a simple logic test.

LLMs Can’t Be Trusted for Financial Advice

The LLM responses demonstrated that they do not have the common sense needed to recognize when their answers are obviously wrong
It takes an experienced financial planner to distinguish between good and bad advice, so clients may as well skip the LLMs and go to the knowledgeable human.

A Man, A Boat, and a Goat — and a Chatbot!

Forty-five years ago, Douglas Hofstadter noted a key problem with AI: It can’t do the astonishing things our brains do, as chatbots reveal when asked to solve puzzles
Not understanding what words mean or how they relate to the real world, chatbots have no way of determining whether their responses are sensible, let alone true

The AI Hype Machine Just Rolls On, Living on Exhaust

Even chatbot enthusiasts are starting to admit that scaling up LLMs will not create genuine artificial intelligence
Decades of geniuses trying to build computers that are as intelligent as they are have shown how truly remarkable our brains are—and how little we understand.

The Flea Market of the Internet: Breaking the Addiction

When, after a bad experience, I called Amazon the “Walmart of the Internet,” a friend pointed out that Amazon is, in fact, much worse than Walmart
Internet-based businesses tend to follow a life cycle in which quality deteriorates over time. Writer Cory Doctorow calls the process “enshittification.”

Sora: Life Is Not a Multiple-Choice Test

With Sora, as with other generative AI developments, some are quick to proclaim that artificial general intelligence has arrived. Not so fast.
The hallucinations are symptomatic of generative AI models’ core problem: they can’t identify output problems because they know nothing about the real world.

Retracted Paper Is a Compelling Case for Reform

The credibility of science is being undermined by misuse of the tools created by scientists. Here's an example from an economics paper I was asked to comment on
In my book Distrust (Oxford 2023), I recommend that journals not publish data-driven research without public access to nonconfidential data and methods used.

Why Chatbots (LLMs) Flunk Routine Grade 9 Math Tests

Lack of true understanding is the Achilles heel of Large Language Models (LLMs). Have a look at the excruciating results
Chatbots don’t understand, in any meaningful sense, what words mean and therefore do not know how the given numbers should be used.

Internet Pollution — If You Tell a Lie Long Enough…

Large Language Models (chatbots) can generate falsehoods faster than humans can correct them. For example, they might say that the Soviets sent bears into space...
Later, Copilot and other LLMs will be trained to say no bears have been sent into space but many thousands of other misstatements will fly under their radar.

Computers Still Do Not “Understand”

Don't be seduced into attributing human traits to computers.
Imagine people making decisions that are influenced by an LLM that does not understand the meaning of any of the words it inputs and outputs.

When It Comes to New Technologies Like AI, Tempers Run Hot

So far, the most tangible LLM successes have been in generating political disinformation and phishing scams.
LLMs often remind us of clueless students who answer essay questions by writing everything they think is relevant, hoping the right answer is in there somewhere