Erik J. Larson

Fellow, Technology and Democracy Project

Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. He works on issues in computational technology and artificial intelligence (AI), and he is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation combined work in analytic philosophy, computer science, and linguistics, and included faculty from all three departments. Larson writes for the Substack Colligo.

Larson’s Ph.D. dissertation served as the basis for a provisional patent on using hierarchical classification techniques to locate specific event mentions in free text. His work on supervised machine learning methods for information extraction and natural language processing (NLP) led him to found a software company in 2007 dedicated to research and development on classifying blogs and other web text. Larson wrote several successful proposals to the Defense Advanced Research Projects Agency (DARPA) and was awarded over $1.7 million in funding to perform cutting-edge work in AI. His company was based in Austin, Texas, and Palo Alto, California.

In addition to founding and heading the company, Larson has over a decade of experience as a professional software developer and a scientist in NLP, a central field in artificial intelligence. He worked on the famous “Cyc” project at Cycorp, a decades-long effort to encode common sense into machines, led by AI pioneer and former Carnegie Mellon and Stanford professor Douglas Lenat. Cycorp is best known for engineering the world’s first commonsense knowledge base, and Larson’s experience as an engineer there has proven invaluable in his ongoing attempt to understand the challenges AI systems face in the real world. Larson has also held a position as a research scientist at the IC2 Institute at The University of Texas at Austin, where he led a team of researchers working on information extraction techniques for free text, a project funded in part by Lockheed Martin’s Advanced Technology Laboratories. He has served as Chief Scientist at an AI-based startup whose first customer was Dell (Dell Legal) and as Senior Research Engineer at the AI company 21st Century Technologies in Austin. He has also worked as an NLP consultant for Knowledge Based Systems, Inc., and has consulted with other companies in Austin, helping to design AI systems that solve problems in natural language understanding.


The New Tower of Babel

The old ways of arguing and understanding each other are on the decline, if not on life support.
We all know Babel (no, not the language learning company). It’s in Genesis. The Biblical story about God making so many languages and dialects and (let’s add) opinions that no one could understand each other or effectively communicate. One legacy of the triumph of digital technology and AI in every corner of our existence is that we’ve recreated this Babel. Let me try to unpack this, and bear with me if it seems I’m saying something derogatory about one belief or another — my aim is to avoid that game and try to explain the mechanism, the social and cultural story, by which our new Babel is ascendant, and the old ways of arguing and understanding each other are on the decline, if not on life support. Start with an oldy but goody: the old war between scientific materialists

If AIs Don’t Know What They’re Doing, Can We Hope to Explain It?

With AI, we have a world of powerful, useful, but entirely opaque systems. We don’t know why they make decisions, and neither do they.
In yesterday’s post, I talked about the fact that AIs don’t understand the work they’re doing. That makes the goal of getting them to think like people elusive. This brings us to the second problem, which ended up spawning an entire field, known as “Explainable AI.” Neural networks not only don’t know what they’re doing when they do it; they also can’t, in general, explain to their designers or users why they made such-and-such a decision. They’re a black box: obstinately opaque, in other words, to any attempt at a conceptual understanding of their decisions or inferences. How does that play out? It means, for example, that in image recognition tasks like facial recognition, the network can’t explain why it thought someone was a criminal (because he…

Why, Despite All the Hype We Hear, AI Is Not “One of Us”

It takes an imaginative computer scientist to believe that the neural network knows what it’s classifying or identifying. It’s a bunch of relatively simple math.
The AI scientist’s dream of general intelligence, often referred to as Artificial General Intelligence (AGI), remains as elusive as ever.

The Present Shock We’re Experiencing

Our modern obsession with the possibility of truly smart machinery keeps a self-important anti-humanism alive and kicking.
It’s downright bizarre to view Big Data as a replacement for human ingenuity and possibility.

This is Digital McCarthyism

Far from being liberated by these technologies, we have been plunged back into the worst abuses of surveillance and privacy violation.
The notion that we’re getting somewhere, making progress, is remarkably durable. It survives wars, financial collapse, riots, scandals, stagnating wages, and climate change (to name a few). Though techno-futurists are also fond of AI apocalypse scenarios, where artificial intelligence somehow “comes alive,” or at any rate uses its superior intelligence to make an autonomous decision to wipe out humanity, much more ink has been spilled this century prognosticating indomitable technical progress, which somehow stands in for human progress generally. But this sanguine belief in progress is belied by the actual events of the twenty-first century. Computers have gotten faster and AI more powerful, but digital technology has also been used to spread misinformation, make deepfakes, and…

The Modern World’s Bureaucracy Problem

The Iron Law states that any market reform or government initiative aimed at shrinking bureaucracy ends up expanding it.
Bureaucracy keeps getting bigger, no matter how much politicians talk about taming it.

Is ChatGPT a Dead End?

There is still no known path to Artificial General Intelligence, and ChatGPT does not provide one.
I have friends and colleagues who’ve gone almost religious about LLMs and conversational systems like ChatGPT. They tell me it’s true AGI. I disagree.

What Mission Impossible Tells Us About AI Mythology

If you’re looking for an intelligent take on existential risk and superintelligent AI, the latest Mission Impossible movie is not for you.

Don’t Expect AI to Revolutionize Science

Data science is a downstream phenomenon. Thinking isn't.
We need to start asking tough questions about what happened to the role of human thinking and insight in the discovery of how the world, and we, work.

How Can We Make Genuine Progress on AI?

True progress on AI means moving beyond induction and data analysis. Researchers must start taking the “commonsense knowledge problem” seriously.
True progress on AI — let alone human progress — means moving beyond induction and data analysis, an approach that is now over a decade old and reaching saturation.