Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence

Archives

Machines Can’t Teach Us How To Learn

A recent study used computer simulations to test the “small mistakes” rule in human learning

Machine learning is not at all like human learning. For example, machine learning frequently requires millions of examples. Humans learn from a few examples.

Just a light frost—or AI winter?

It’s nice to be right once in a while—check out the evidence for yourself

About a year ago, I wrote that mounting AI hype would likely give way to yet another AI winter. Now, according to the panelists at “the world’s leading academic AI conference,” the temperature is already falling.

Death Spurs Demand for More Oversight of Self-Driving Cars

The National Transportation Safety Board seeks uniform standards for the previously voluntary information provided by carmakers

Despite the hype and a few bad actors, here at the Walter Bradley Institute, we believe in AI. The deployment of any technology—dams, bridges, buildings—requires care and, at times, oversight. Not to slow down progress but to protect us from ourselves. As the great scientist Richard Feynman put it, the easiest person to fool is oneself.

I Am Giving Up Cycling

It’s just not worth it if a machine can beat me
It’s not that I cannot cycle or that I don’t like to or that I’m not good at it (for a human). But just the other day, as I pedaled along, I was passed by a motorcycle. Its speed was incredible! I appeared to be pedaling in place as the machine zoomed into the distance. In that moment, it became all too clear that my days as a meaningful human were ending. The machine was my better. Okay. That is not true. I am not going to quit cycling. And, being passed by a motorcycle—a machine we built purposely to go faster than anything our two legs can achieve—is not a meaningful measure of my prowess as a cyclist. Many people must agree with me. The Tour de France (est. 1903) did not start until decades after the first motorcycles (winner Maurice Garin pictured above)…

What Can the Cybertruck Tell Us About Silicon Valley?

Does Elon Musk’s view of human beings help account for his new truck’s massive armor?

Pardon me, but while I know that a good truck needs to be tough, I never thought it needed to be a Mad Max-styled warrior vehicle. Apparently, Musk does. Why?

Pizza Robots Get the Pink Slip

True, the doughbots didn’t make good pizza. But is the message about them or something else?

I have nothing against robots. (I am against bad pizza.) I do, however, get very tired of the science fiction-fantasy of humanity-squashing robots. And that’s all it is: A fantasy.

Should AI-Written News Stories Have Bylines? Whose?

Like it or not, AI is here to stay. So, how do we make the best use of it in writing?

Automation can help some aspects of writing. But media outlets get goggle-eyed over tech and too often fail to ask the hard questions about what they are automating, how, and why.

Big Tech Tries to Fight Racist and Sexist Data

The trouble is, no machine can be better than its underlying training data. That’s baked in

The problem with machine learning-based AI in police work is not so much its inherent bias (none of us is bias-free) but the delegation to a machine of what should be a human decision.

Can Predictive Text Replace Writers?

A New Yorker staff writer ponders his future and the machine’s

Predictive text analyzes what was written and guesses what comes next: whole sentences, paragraphs, even essays. Staff writer John Seabrook tested it on his own work…

Alpha Go as Alpha Maybe?

DeepMind’s AlphaGo defeated a world-champion Go player but further gains were hard won at best
The question scientists must ask, especially about an unexpected finding, is: if no one can reproduce your results, did you discover something new or did you just get lucky? With AI, that’s not easy, due to its dependence on randomness.

Has Aristo broken bounds for thinking computers?

The Grade 8 graduate improves on Watson but we must still think for ourselves at school. Here’s why
Aristo combines questions and answers on a multiple-choice test to decide on the best answer without understanding any of the information.

Alexa Really Does Not Understand Us

In a recent test, only 35 percent of the responses to simple questions were judged adequate
Actually, I am impressed that voice assistants work as well as they do, given the number of AI problems that had to be solved. But consider how much more complex the problems facing a self-driving car are.

Tell Kids the Robot Is “It,” Not “He”

Teaching children to understand AI and robotics is part of a good education today
We are not truly likely to be ruled by AI overlords (as opposed to powerful people using AI). But even doubtful predictions may be self-fulfilling if enough impressionable people come to believe them. Children, for example. We adults are aware of the limitations of AI. But if we talk about AI devices as if they were people, children—who often imbue even stuffed toys with complex personalities—may be easily confused. Sue Shellenbarger, Work & Family columnist at The Wall Street Journal, warns that already, “Many children think robots are smarter than humans or imbue them with magical powers.” While she admits that the “long-term consequences” are still unclear, “an expanding body of research” suggests we need to train children to draw boundaries between…

Will Industry Pressure Loosen Self-Driving Car Tests?

Right now, the regulatory agency is under pressure to accept the industry’s “softball” testing suggestions
The regulatory agency (NHTSA) needs to adapt. But trusting technical documentation alone or only testing already sold vehicles is grossly insufficient. Technical documentation is what engineers think should happen; it is not the future. And testing sold vehicles creates an incentive to skimp on tests.

Are self-driving cars really safer?

A former Uber executive says no. Before we throw away the Driver’s Handbook…
Current claims that self-driving cars are safer are hype, not measurement. Meanwhile, Congress is expected to push for legislation next month to pave the way for widespread use of self-driving vehicles without a consensus on safety standards.

Does a Western Bias Affect Self-Driving Cars?

How a driver is expected to act varies by culture
Self-driving cars (autonomous vehicles) will need to adapt to different rules and we will, very likely, need to change those rules to make the vehicles work.