Jeffrey Funk

Fellow, Walter Bradley Center for Natural and Artificial Intelligence

Jeffrey Funk is the winner of the NTT DoCoMo Mobile Science Award and the author of five books. His forthcoming book, Unicorns, Hype and Bubbles: A Guide to Spotting, Avoiding and Exploiting Investment Bubbles In Tech, will be published by Harriman House this fall.

Archives

Musk and LeCun Have a Superficial Debate About Science

What would have been a better debate?
Elon Musk tweeted the following: “Join xAI if you believe in our mission of understanding the universe, which requires maximally rigorous pursuit of the truth, without regard to popularity or political correctness.” Yann LeCun, chief scientist at tech giant Meta, could not resist responding: Musk, he wrote, claims to “want a maximally rigorous pursuit of the truth but spews crazy-ass conspiracy theories on his own social platform.” It escalated quickly, with Musk questioning what science LeCun had done in the past five years, and LeCun replying: “Over 80 technical papers published since January 2022. What about you?” LeCun then added: “If you do research and don’t publish, it’s not science.” So the most successful engineer over the last ten years criticizes academic…

Hype Distracts AI Engineers from Real Work

Who is going to solve AI's actual problems?
One reason there wasn’t an emphasis on reducing hallucinations is that the problem is hard. Some argue that hallucinations are “baked into” AI chatbots.

AI’s Illusion of Rapid Progress

It always seems to be on the verge of perfection
Too many people are extrapolating from systems that are purportedly automated, even though those systems aren’t yet working properly.

The State of Innovation and the Impact of AI

In this episode, host Robert J. Marks discusses the state of innovation and the impact of AI with guest Jeffrey Funk, author of the book Technology Change and the Rise of New Industries. They discuss the hype around AI, the limitations of large language models like GPT-3, the slowing rate of innovation, the impact of Goodhart’s Law on academia, and the need for a shift in metrics and a focus on practical applications. They also touch on the role of universities and corporations in driving innovation and the need for cross-fertilization and collaboration. Overall, they express skepticism about the current state of AI and emphasize the importance of measuring success based on real-world impact rather than just publications and metrics.

Sundar Pichai Says AI Will Be as Big as Fire

The AI bubble is going to pop.
Ask someone how big AI will be, and the answer is likely huge. But how big is huge? Why does this matter? Because big forecasts encourage big investments, trials, and purchases. After big consulting companies predicted eight years ago that AI would deliver economic gains of about $15 trillion by 2030, many countries and companies felt the need to pay for their own reports from those same consultants. Those consultants said, of course, that the countries could experience rapid productivity gains and the companies rising profits if they implemented AI in the right way, which was, naturally, under the guidance of the consulting companies! Eight years later, few of their predictions have come true. But their optimistic predictions are back again, with big forecasts…

Sora: Life Is Not a Multiple-Choice Test

With Sora, as with other generative AI developments, some are quick to proclaim that artificial general intelligence has arrived. Not so fast.
The hallucinations are symptomatic of generative AI models’ core problem: they can’t identify output problems because they know nothing about the real world.

Why Do Universities Ignore Good Ideas?

Funding agencies check whether the researcher is tenured or has already received funding. It's a vicious cycle.
Katalin Karikó’s Nobel Prize didn’t prove that universities don’t fund good ideas. It merely reminded us that they rarely do.

Are Good Ideas Hard to Find?

This academic paper tells us a lot about why innovation has slowed
Many people do not think of the small, mostly highly technical ideas that enabled the improvements in chips, crop yields, and new drugs.

When It Comes to New Technologies Like AI, Tempers Run Hot

So far, the most tangible LLM successes have been in generating political disinformation and phishing scams.
LLMs often remind us of clueless students who answer essay questions by writing everything they think is relevant, hoping the right answer is in there somewhere.

How Do We Define Successful Use Cases for Generative AI?

Current generative AI systems are designed to give us the most common solutions, instead of the new ones we need.
Evidence that existing ideas aren’t so good can be seen in the big startup losses, slow diffusion of technology, and slow rate of productivity improvement.

Why Are We Obsessed With How Smart AI Is?

The people with the most specific knowledge should be assessing applications of AI and their risks.
The biggest lesson from giving university exams to ChatGPT is that students should be tested in other ways.

Using Data Like a Drunk Uses a Lamppost

Startup companies can be tempted to use statistics for support instead of real illumination
Karl Pearson, the great English mathematician and statistician, wrote, “Statistics is the grammar of science.” But it can be used to mislead, too.