Erik J. Larson

Fellow, Technology and Democracy Project

Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. He works on issues in computational technology and artificial intelligence (AI) and is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009; his dissertation was a hybrid project combining work in analytic philosophy, computer science, and linguistics, with faculty from all three departments. Larson writes for the Substack Colligo.

Larson’s Ph.D. dissertation served as the basis for a provisional patent on using hierarchical classification techniques to locate specific event mentions in free text. His work on supervised machine learning methods for information extraction and natural language processing (NLP) helped him found a software company in 2007 dedicated to research and development on classifying blogs and other web text. Larson wrote several successful proposals to the Defense Advanced Research Projects Agency (DARPA) and was awarded over $1.7 million in funding to perform cutting-edge work in AI. His company was based in Austin, Texas, and Palo Alto, California.

In addition to founding and heading the company, Larson has over a decade of experience as a professional software developer and scientist in NLP, a central field in artificial intelligence. He worked on the famous “Cyc” project at Cycorp, a decades-long effort to encode common sense into machines, led by Carnegie Mellon and Stanford professor and AI pioneer Douglas Lenat. Cycorp is best known as a company dedicated to engineering the world’s first commonsense knowledge base, and Larson’s experience as an engineer there has proven invaluable in his ongoing effort to understand the challenges AI systems face in the real world. Larson has also held a position as a research scientist at the IC2 Institute at The University of Texas at Austin, where he led a team of researchers working on information extraction techniques for free text, a project funded in part by Lockheed Martin’s Advanced Technology Laboratories. He has served as Chief Scientist at an AI-based startup whose first customer was Dell (Dell Legal), as Senior Research Engineer at the AI company 21st Century Technologies in Austin, and as an NLP consultant for Knowledge Based Systems, Inc., and has consulted with other Austin companies, helping to design AI systems that solve problems in natural language understanding.

Archives

Artificial Intelligence, Science and the Limits of Knowledge

In Part 3, I show that AI, like science, has limits. It depends on narrowing a problem: making it specific, discarding most possibilities, and sealing it inside a representation and a specification
Here’s the problem with artificial general intelligence: asking how to make systems “general” is asking how to remove the very constraints that made them work.

Surprise: Artificial Intelligence Is Still Just Automation

I wrote this in 2016. And it is still true in 2025. A reflection in three parts
When I first wrote this almost a decade ago, “AI” was already a cultural Rorschach test. To some, it was exciting and futuristic. To others, it was ominous, Orwellian, or just marketing spin. Automation, by contrast, was the unglamorous cousin that conjured images of soulless machines taking over the last shreds of human purpose. But from the start, my view was simple: what we call “AI” today is still just automation. And automation is not a mind. That argument has aged better than I expected. In the years since, we’ve seen an explosion of so-called AI — from self-driving cars to ChatGPT — yet the distinction between AI and automation remains almost universally misunderstood. Recently, computational linguist Emily Bender and Alex Hanna, in The AI Con: How to...

Part 3: A Wren Arrives — and Ruffles Many a Feather

Dr. Wren, a cognitive scientist, identifies a problem with assuming that adding another ten thousand pigeons to the project will produce novel designs...
We remain confronted by the same old mystery: who, or what, imagines the birdhouse in the first place?

Part 2: Have the Superbirds Arrived? Are They Taking Over?

Dr. Avian now claims that his work with trained birds shows that intelligence does not require inner models or internal representations, as formerly thought
Avian is perfectly clear: There is no mind at all in Coordinated Avian Models (CAMs). And yet, they’re behind a staggering number of new designs.

Why AI Breaks Down Where Human Creativity Begins

Part 1: AI can handle statements that are internally coherent but that is not the same thing as correspondence with reality
In short, philosophers distinguish between two fundamental theories of truth: correspondence and coherence, and AI does only coherence.

The Limits of What We Can Learn From Studying Creativity

In this third and final part of my essay, I look at what sets us apart from machines: our capacity to leap from commonsense inferences to entirely new ways of understanding reality
People struggling in the aftermath of brain injuries provide some valuable insights into that leap.

Stranger Things: Why Mad Scientists Are Mad

At the highest levels, creativity seems to bypass the deliberate, structured thought process altogether
The real danger of reductionism is not just that it fails to explain creativity, but that it actively encourages dismissal of what cannot be reduced.

The Slow Decline of a Key Aspect of Creativity

The mechanization of mind is changing how we think about creativity — and not in a good way
In this first of three parts, I look at the role of serendipity — the art of making happy, unexpected discoveries — and how a mechanized world diminishes it.

Part 2: The Fiction of Generalizable AI: How to Game the System

Progress toward real generalization, by any substantive measure, is nil. Perhaps we should reexamine the very concept of the “I” in AI
In too many discussions, intelligence is treated as if it were a linear phenomenon that more scaling and a few extra gigabytes of data will solve.

The Fiction of Generalizable AI: A Tale in Two Parts

Why intelligence isn’t a linear scale — and why true generalization remains unsolved
The big idea behind generative AI (mistakenly) assumes a network can start blank and be transformed into an intelligent agent simply via enough data.

The Linda Problem Revisited, As If Reality Matters

Part 2: AI enthusiasts use false claims about humans' “natural stupidity” to bolster claims for machine intelligence
The people who flunked the Linda problem were not biased; they just assumed there was some POINT to telling them that Linda was active in social justice issues.