Erik J. Larson

Fellow, Technology and Democracy Project

Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book was a finalist for the Media Ecology Association Awards and was nominated for the Robert K. Merton Book Award. He works on issues in computational technology and artificial intelligence (AI). He is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation was a hybrid, combining work in analytic philosophy, computer science, and linguistics, with committee members drawn from all three departments. Larson writes for the Substack Colligo.

Larson’s Ph.D. dissertation served as the basis for a provisional patent on using hierarchical classification techniques to locate specific event mentions in free text. His work on supervised machine learning methods for information extraction and natural language processing (NLP) led him to found a software company in 2007 dedicated to research and development on classifying blogs and other online text. Larson wrote several successful proposals to the Defense Advanced Research Projects Agency (DARPA) and was awarded over $1.7 million in funding to perform cutting-edge work in AI. His company was based in Austin, Texas, and Palo Alto, California.

In addition to founding and heading the company, Larson has over a decade of experience as a professional software developer and scientist in NLP, a central field in artificial intelligence. He worked on the famous “Cyc” project at Cycorp, a decades-long effort to encode common sense into machines, led by AI pioneer Douglas Lenat, a former Carnegie Mellon and Stanford professor. Cycorp is best known for engineering the world’s first commonsense knowledge base, and Larson’s experience as an engineer there has proven invaluable in his ongoing attempt to understand the challenges AI systems face in the real world.

Larson has also held a position as a research scientist at the IC2 Institute at The University of Texas at Austin, where he led a team of researchers working on information extraction techniques for free text, a project funded in part by Lockheed Martin’s Advanced Technology Laboratories. He has served as Chief Scientist at an AI-based startup whose first customer was Dell (Dell Legal) and as Senior Research Engineer at the AI company 21st Century Technologies in Austin, worked as an NLP consultant for Knowledge Based Systems, Inc., and has consulted with other companies in Austin, helping to design AI systems that solve problems in natural language understanding.

Archives

How Fruit Flies, Bees, and Squirrels Beat Artificial Intelligence

AI researchers assume they are on the path to intelligence, yet intelligence itself remains a mystery, and many animals do better than current AI
Real intelligence is embodied. It exists within a living system, interacting dynamically with an environment. AI, on the other hand, is an abstraction.

AI in Biology: The Future AI Didn’t Predict

It doesn’t look like the past. Physical systems that evolve over time but don’t follow a fixed formula have always presented a deep challenge to AI
The problem of outliers or “edge cases” has frustrated AI scientists and engineers (and now structural biologists) for decades, and there’s no good answer yet.

AI in Biology: The Disease Connection — When Proteins Go Wrong

Some of the most crucial proteins for human health—the ones we need to understand most urgently—are the very ones that AI has the hardest time modeling
The issue is not simply that AI struggles with intrinsically disordered regions — it is that the very premise of IDR behavior contradicts the way these models operate.

AI in Biology: So Is This the End of the Experiment? No.

But a continuing challenge is that many of the most biologically important proteins don’t adopt a single stable structure. Their functions depend on structural fluidity
The core issue is that AI isn’t just missing data: AlphaFold’s entire approach is built on assumptions that don’t apply to disordered proteins.

AI in Biology: What Difference Did the Rise of the Machines Make?

AI works very well for proteins that lock into a single configuration, as many do. But intrinsically disordered ones don’t play by those rules
The resulting problems aren’t a temporary bug — they’re a basic limitation of training a machine learning model on a dataset where proteins always fold neatly.

AI in Biology: AI Meets Intrinsically Disordered Proteins

Protein folding — the process by which a protein arrives at its functional shape — is one of the most complex unsolved problems in biology
The mystery of protein folding remains unsolved because, as is so often the case with AI narratives, the reality is much more complicated than the hype.

The Left Brain Delusion: Are We Steamrolling Human Agency?

The two hemispheres of our brain really do see the world differently
Techno-futurists love to dream up visions of the future. Invariably, these are worlds where everything is under control—where every problem has a solution, and the future unfolds exactly as planned. We do seem to be moving toward some sort of centralized loss of agency. But what’s distinctive about the techno-futurist vision is the belief that this is not only inevitable but wonderful. Self-driving cars eliminate wasted time in traffic; smart cities like Songdo or Masdar City adjust every streetlight and service in real time to optimize efficiency. AI-driven healthcare, like the tools developed by Google’s DeepMind, promises to pinpoint diagnoses. Automated finance uses algorithms to manage our money and secure our futures. Everything works, all the…

The New Tower of Babel

The old ways of arguing and understanding each other are on the decline, if not on life support.
We all know Babel (no, not the language learning company). It’s the Biblical story in Genesis about God making so many languages and dialects and (let’s add) opinions that no one could understand anyone else or communicate effectively. One legacy of the triumph of digital technology and AI in every corner of our existence is that we’ve recreated this Babel. Let me try to unpack this, and bear with me if it seems I’m saying something derogatory about one belief or another — my aim is to avoid that game and try to explain the mechanism, the social and cultural story, by which our new Babel is ascendant, and the old ways of arguing and understanding each other are on the decline, if not on life support. Start with an oldie but goodie: the old war between scientific materialists…

If AIs Don’t Know What They’re Doing, Can We Hope to Explain It?

With AI, we have a world of powerful, useful, but entirely opaque systems. We don’t know why they make decisions and neither do they
In yesterday’s post, I talked about the fact that AIs don’t understand the work they’re doing. That makes the goal — to make them think like people — elusive. This brings us to the second problem, which ended up spawning an entire field, known as “Explainable AI.” Neural networks not only don’t know what they’re doing when they do it, they can’t in general explain to their designers or users why they made such-and-such a decision. They’re a black box; in other words, they are obstinately opaque to any attempts at a conceptual understanding of their decisions or inferences. How does that play out? It means, for example, that, with image recognition tasks like facial recognition, the network can’t explain why it thought someone was a criminal (because he…

Why, Despite All the Hype We Hear, AI Is Not “One of Us”

It takes an imaginative computer scientist to believe that the neural network knows what it’s classifying or identifying. It’s a bunch of relatively simple math
The AI scientist’s dream of general intelligence, often referred to as Artificial General Intelligence (AGI), remains as elusive as ever.

The Present Shock We’re Experiencing

Our modern obsession with the possibility of truly smart machinery keeps a self-important anti-humanism alive and kicking.
It’s downright bizarre to view Big Data as a replacement for human ingenuity and possibility.

This is Digital McCarthyism

Far from being liberated by these technologies, we have been plunged back into the worst abuses of surveillance and privacy violation.
The notion that we’re getting somewhere, making progress, is remarkably durable. It survives wars, financial collapse, riots, scandals, stagnating wages, and climate change (to name a few). Though techno-futurists are also fond of AI apocalypse scenarios, where artificial intelligence somehow “comes alive,” or at any rate uses its superior intelligence to make an autonomous decision to wipe out humanity, much more ink has been spilled this century prognosticating indomitable technical progress, which somehow stands in for human progress generally. But sanguine belief in progress is belied by the actual events of the twenty-first century. Computers have gotten faster and AI more powerful, but digital technology has also been used to spread misinformation, make deepfakes, and…

The Modern World’s Bureaucracy Problem

The Iron Law states that any market reform or government initiative aimed at shrinking bureaucracy ends up expanding it.
Bureaucracy keeps getting bigger, no matter how much politicians talk about taming it.

Is ChatGPT a Dead End?

There is still no known path to Artificial General Intelligence, ChatGPT included.
I have friends and colleagues who’ve gone almost religious about LLMs and conversational systems like ChatGPT. They tell me it’s true AGI. I disagree.

What Mission Impossible Tells Us About AI Mythology

If you’re looking for an intelligent take on existential risk and superintelligent AI, the latest Mission Impossible movie is not for you.