Gary N. Smith

Senior Fellow, Walter Bradley Center for Natural and Artificial Intelligence

Gary N. Smith is the Fletcher Jones Professor of Economics at Pomona College. His research on financial markets, statistical reasoning, and artificial intelligence, often involving stock market anomalies, statistical fallacies, and the misuse of data, has been widely cited. He is the author of dozens of research articles and 16 books, most recently The Power of Modern Value Investing: Beyond Indexing, Algos, and Alpha, co-authored with Margaret Smith (Palgrave Macmillan, 2023).

Archives

The AI Hype Machine Just Rolls On, Living on Exhaust

Even chatbot enthusiasts are starting to admit that scaling up LLMs will not create genuine artificial intelligence
Decades of geniuses trying to build computers that are as intelligent as they are have shown how truly remarkable our brains are—and how little we understand.

The Flea Market of the Internet: Breaking the Addiction

When, after a bad experience, I called Amazon the “Walmart of the Internet,” a friend pointed out that Amazon is, in fact, much worse than Walmart
Internet-based businesses tend to follow a life cycle in which quality deteriorates over time. Writer Cory Doctorow calls the process “enshittification.”

Sora: Life Is Not a Multiple-Choice Test

With Sora, as with other generative AI developments, some are quick to proclaim that artificial general intelligence has arrived. Not so fast.
The hallucinations are symptomatic of generative AI models’ core problem: they can’t identify output problems because they know nothing about the real world.

Retracted Paper Is a Compelling Case for Reform

The credibility of science is being undermined by misuse of the tools created by scientists. Here's an example from an economics paper I was asked to comment on
In my book Distrust (Oxford 2023), I recommend that journals not publish data-driven research without public access to nonconfidential data and methods used.

Why Chatbots (LLMs) Flunk Routine Grade 9 Math Tests

Lack of true understanding is the Achilles heel of Large Language Models (LLMs). Have a look at the excruciating results
Chatbots don’t understand, in any meaningful sense, what words mean and therefore do not know how the given numbers should be used.

Internet Pollution — If You Tell a Lie Long Enough…

Large Language Models (chatbots) can generate falsehoods faster than humans can correct them. For example, they might say that the Soviets sent bears into space...
Later, Copilot and other LLMs will be trained to say no bears have been sent into space, but many thousands of other misstatements will fly under their radar.

Computers Still Do Not “Understand”

Don't be seduced into attributing human traits to computers.
Imagine people making decisions that are influenced by an LLM that does not understand the meaning of any of the words it inputs and outputs.

When It Comes to New Technologies Like AI, Tempers Run Hot

So far, the most tangible LLM successes have been in generating political disinformation and phishing scams.
LLMs often remind us of clueless students who answer essay questions by writing everything they think is relevant, hoping the right answer is in there somewhere

Large Language Models Are Still Smoke and Mirrors

Incapable of understanding, LLMs are good at giving bloated answers.
I recently received an email invitation from Google to try Gemini Pro in Bard. There was an accompanying video demonstration of Bard’s powers, which I didn’t bother watching because of reports that a Gemini promotional video released a few days earlier had been faked. After TED organizer Chris Anderson watched the video, he tweeted, “I can’t stop thinking about the implications of this demo. Surely it’s not crazy to think that sometime next year, a fledgling Gemini 2.0 could attend a board meeting, read the briefing docs, look at the slides, listen to everyone’s words, and make intelligent contributions to the issues debated? Now tell me. Wouldn’t that count as AGI?” Legendary software engineer Grady Booch replied, “That demo was incredibly edited to…

LLMs Are Still Faux Intelligence

Large language models are remarkable, but it’s a huge mistake to think they’re “intelligent” in any meaningful sense of the word.
It is wishful thinking to interpret these results and other LLM performances as evidence of logical reasoning.

A Modest Proposal for the MLB

Major League Baseball got greedy and needs to reform.
If the regular season is devalued and the playoffs are recognized as essentially meaningless, the MLB may actually lose money.

The MLB Coin-Flipping Contest

What are the chances that wild-card teams will make it to the World Series and win?
One anomaly this year is that Seattle didn’t qualify for the playoffs even though it had a better record than the Central Division winner, Minnesota.
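
To make the coin-flip framing concrete, here is a back-of-the-envelope sketch in Python. The model is my assumption, not the article’s exact numbers: treat every playoff series as a fair coin flip under the current 12-team format, in which the top two division winners in each league get first-round byes.

```python
# Hypothetical coin-flip model of the 12-team MLB playoffs: the top two
# division winners in each league get byes (3 series to win the World
# Series); the other 8 teams, including all 6 wild cards, must win 4.
p_bye  = 0.5 ** 3   # 12.5% for each of the 4 bye teams
p_rest = 0.5 ** 4   # 6.25% for each of the other 8 teams

assert 4 * p_bye + 8 * p_rest == 1.0   # the probabilities add up to 1

print(f"a given wild card wins it all: {p_rest:.2%}")      # 6.25%
print(f"some wild card wins it all:    {6 * p_rest:.2%}")  # 37.50%
```

Under pure coin flipping, then, a wild-card champion is more of an expectation than an anomaly: some wild card wins the World Series more than a third of the time.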

Blue Zone BS: The Longevity Cluster Myth

We need to be reminded how much real science has done for us and how real science is done.
Real science is currently under siege, pummeled by conspiracy nuts and undermined internally by a replication crisis created by sloppy science.

Confusing Correlation with Causation

Computers are amazing. But they can't distinguish between correlation and causation.
Artificial intelligence (AI) algorithms are terrific at discovering statistical correlations but terrible at distinguishing between correlation and causation. A computer algorithm might find a correlation between how often a person has been in an automobile accident and the words they post on Facebook, between being a good software engineer and visiting certain websites, and between making loan payments on time and keeping one’s phone fully charged. However, computer algorithms do not know what any of these things are and consequently have no way of determining whether these are causal relationships (and therefore useful predictors) or fleeting coincidences (that are useless predictors). If the program is a black box, then humans cannot intervene and declare that these are almost certainly irrelevant…
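
To see how easily such coincidences arise, here is a minimal sketch with hypothetical data (invented for illustration, not from the article): search enough noise variables and an impressive-looking correlation is virtually guaranteed.

```python
import numpy as np

rng = np.random.default_rng(0)

# 50 "people", 1,000 candidate features (word counts, websites visited,
# phone-charging habits...), all pure, unrelated noise.
n_obs, n_features = 50, 1000
features = rng.normal(size=(n_obs, n_features))
target = rng.normal(size=n_obs)          # e.g., accident frequency

# Correlate every feature with the target and keep the winner,
# the way a data-mining algorithm would.
corrs = np.array([np.corrcoef(features[:, j], target)[0, 1]
                  for j in range(n_features)])
best = np.abs(corrs).argmax()
print(f"largest |correlation| found: {abs(corrs[best]):.2f}")
# Typically around 0.4 to 0.5: impressive-looking, entirely
# coincidental, and useless for predicting fresh data.
```

The winning feature here is noise by construction, so it predicts nothing out of sample, which is exactly the distinction a black-box algorithm cannot make on its own.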

Sabrina Ionescu’s Hot Hand

When basketball players hit a "streak," does that elevate the probability of success?
Athletes do sometimes get hot—not that their chance of success is 100% but that it is temporarily elevated above their normal probability.
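
As a rough illustration of what “temporarily elevated” means, here is a simulation sketch with invented parameters (a 50% baseline shooter versus one who, hypothetically, shoots 60% after a make); the conditional hit rate after a streak separates the two.

```python
import numpy as np

rng = np.random.default_rng(1)

def hit_rate_after_streak(shots, k=3):
    """Empirical P(make | previous k shots were all makes)."""
    hits_after = total_after = 0
    streak = 0
    for s in shots:
        if streak >= k:
            total_after += 1
            hits_after += int(s)
        streak = streak + 1 if s else 0
    return hits_after / total_after

n = 200_000

# Shooter A: a coin flipper, 50% on every shot, no matter what.
coin = rng.random(n) < 0.5

# Shooter B: a streaky shooter, 50% normally but 60% whenever the
# previous shot was a make (invented numbers for illustration).
streaky, prev = [], False
for _ in range(n):
    prev = rng.random() < (0.6 if prev else 0.5)
    streaky.append(prev)

print(f"coin flipper after 3 makes:   {hit_rate_after_streak(coin):.3f}")     # ~0.50
print(f"streaky shooter after 3 makes: {hit_rate_after_streak(streaky):.3f}")  # ~0.60
```

The coin flipper produces plenty of streaks, but the makes after them stay near 50%; only for the genuinely streaky shooter is the post-streak rate elevated above the baseline.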

Using Data Like a Drunk Uses a Lamppost

Startup companies can be tempted to use statistics for support instead of real illumination
Karl Pearson, the great English mathematician and statistician, wrote, “Statistics is the grammar of science.” But it can be used to mislead, too.

The LLM Deep Fake—Follow the Money

Hype sells. Just look at what's happening with AI
Large language models (LLMs) are far more impressive now than they were just a few months or a year ago. Compare these exchanges I had with GPT-3 in June 2022 and with ChatGPT two days ago:

Gary: Can I improve my test scores if I study after taking the test?

GPT-3: That’s a good question. I’m not sure if you can improve your test scores by studying after taking the test, but it couldn’t hurt to try!

ChatGPT: Studying after taking a test is unlikely to directly improve your scores on that particular test since the test has already been completed. Once a test is finished, the opportunity to answer questions and provide responses has passed. However, studying after taking a test can still have benefits. It allows you to review…