
Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and the author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. Larson works on issues in computational technology and artificial intelligence (AI) and is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009; his dissertation was a hybrid that combined work in analytic philosophy, computer science, and linguistics and drew on faculty from all three departments. Larson writes for the Substack Colligo.
Larson’s Ph.D. dissertation served as the basis for a provisional patent on using hierarchical classification techniques to locate specific event mentions in free text. His work on supervised machine learning methods for information extraction and natural language processing (NLP) helped him found a software company in 2007 dedicated to research and development on classifying blogs and other web text. Larson wrote several successful proposals to the Defense Advanced Research Projects Agency (DARPA) and was awarded over $1.7 million in funding for cutting-edge work in AI. His company was based in Austin, Texas, and Palo Alto, California.
In addition to founding and heading the company, Larson has over a decade of experience as a professional software developer and scientist in NLP, a central field in artificial intelligence. He worked on the famous “Cyc” project at Cycorp, a decades-long effort to encode common sense into machines, led by AI pioneer Douglas Lenat, a former Carnegie Mellon and Stanford professor. Cycorp is best known for engineering the world’s first commonsense knowledge base, and Larson’s experience as an engineer there has proven invaluable in his ongoing effort to understand the challenges AI systems face in the real world. Larson has also been a research scientist at the IC2 Institute at The University of Texas at Austin, where he led a team of researchers working on information extraction techniques for free text, a project funded in part by Lockheed Martin’s Advanced Technology Laboratories. He has served as Chief Scientist at an AI startup whose first customer was Dell (Dell Legal), as Senior Research Engineer at the Austin AI company 21st Century Technologies, and as an NLP consultant for Knowledge Based Systems, Inc., and he has consulted with other Austin companies, helping to design AI systems that solve problems in natural language understanding.
Archives


AI in Biology: The Future AI Didn’t Predict
It doesn’t look like the past. Physical systems that evolve over time but don’t follow a fixed formula have always presented a deep challenge to AI
AI in Biology: The Disease Connection — When Proteins Go Wrong
Some of the most crucial proteins for human health—the ones we need to understand most urgently—are the very ones that AI has the hardest time modeling
AI in Biology: So Is This the End of the Experiment? No.
But a continuing challenge is that many of the most biologically important proteins don’t adopt a single stable structure. Their functions depend on structural fluidity
AI in Biology: What Difference Did the Rise of the Machines Make?
AI works very well for proteins that lock into a single configuration, as many do. But intrinsically disordered ones don’t play by those rules
AI in Biology: AI Meets Intrinsically Disordered Proteins
Protein folding — the process by which a protein arrives at its functional shape — is one of the most complex unsolved problems in biology
Why Humans Aren’t That Biased, and Machines Aren’t That Smart
Claims about the cognitive biases that supposedly overwhelm our judgment should be taken with a helping of salt
Machine Intelligence and Reasoning: We Are Not on a Path to AGI
AI guru François Chollet’s Abstraction and Reasoning Corpus (ARC) proves we’re not on a path to AGI
From Data to Thoughts: Why Language Models Hallucinate
The limits of today’s language models and paths to real cognition
Why Human Intelligence Thrives Where Machines Fail
We’re worried about AI and trust. We should be worried about something deeper
The Left Brain Delusion: Are We Steamrolling Human Agency?
The two hemispheres of our brain really do see the world differently
The New Tower of Babel
The old ways of arguing and understanding each other are on the decline, if not on life support.
If AIs Don’t Know What They’re Doing, Can We Hope to Explain It?
With AI, we have a world of powerful, useful, but entirely opaque systems. We don’t know why they make decisions and neither do they
Why, Despite All the Hype We Hear, AI Is Not “One of Us”
It takes an imaginative computer scientist to believe that the neural network knows what it’s classifying or identifying. It’s a bunch of relatively simple math
The Present Shock We’re Experiencing
Our modern obsession with the possibility of truly smart machinery keeps a self-important anti-humanism alive and kicking.
This is Digital McCarthyism
Far from being liberated by these technologies, we have been plunged back into the worst abuses of surveillance and privacy violation.
The Modern World’s Bureaucracy Problem
The Iron Law states that any market reform or government initiative aimed at shrinking bureaucracy ends up expanding it.
Is ChatGPT a Dead End?
There is still no known path to Artificial General Intelligence, including ChatGPT.
What Mission: Impossible Tells Us About AI Mythology
If you’re looking for an intelligent take on existential risk and superintelligent AI, the latest Mission: Impossible movie is not for you.