Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence

Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s, when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he’s worked as both a Principal Engineer and a Development Manager for industry leaders, such as Microsoft and Amazon, and for numerous start-ups. While he spent most of that time working on other types of software, he’s remained engaged with and interested in Artificial Intelligence.

Archives

Is AI really better than physicians at diagnosis?

The British Medical Journal found a serious problem with the studies

Of 83 studies of the performance of deep learning algorithms on diagnostic images, only two had been randomized, as is recommended to prevent bias in interpretation.

Will the COVID-19 Pandemic Promote Mass Automation?

Caution! Robots don’t file for benefits but that’s not all we need to know about them

I understand the panic many business leaders experience as they try to stay solvent while customers evaporate. Panic, however, is a poor teacher: AI-based automation will not only not solve all their problems, it may very well add to them. AI is not a magic box into which we can stuff our problems and make them disappear.

Star self-driving truck firm shuts; AI not safe enough soon enough

CEO Stefan Seltz-Axmacher is blunt about the cause: Machine learning “doesn’t live up to the hype”

Starsky Robotics was not just another startup overwhelmed by business realities. In 2019, it was named one of the world’s 100 most promising start-ups (CNBC) and one to watch by FreightWaves, a key trucking industry publication. But the AI breakthroughs did not appear.

AI Is Not Ready to Moderate Content!

In the face of COVID-19 quarantines for human moderators, some look to AI to keep the bad stuff off social media

Big social media companies have long wanted to replace human content moderators with AI. COVID-19 quarantines have only intensified that discussion. But AI is far, far from ready to moderate content successfully in an age where virtual monopolies make a single point of failure a frequent risk.

All AIs Are Psychopaths

We can use them but we can’t trust them with moral decisions. They don’t care why

Building an AI entails moving parts of our intelligence into a machine. We can do that with rules, (simplified) virtual worlds, statistical learning… We’ll likely create other means as well. But, as long as “no one is home”—that is, the machines lack minds—gaps will remain and those gaps, without human oversight, can put us at risk.

The “Moral Machine” Is Bad News for AI Ethics

Despite the recent claims of its defenders, there is no way we can outsource moral decision-making to an automated intelligence

Here’s the dilemma: The Moral Machine (the Trolley Problem, updated) feels necessary because the rules by which we order our lives are useless with automated vehicles. Laws embody principles that we apply. Machines have no mind with which to apply the rules. Instead, researchers must train them with millions of examples and hope the machine extracts the correct message…

Machines Never Lie but Programmers… Sometimes

A creative claim is floating around out there that bad AI results can arise from machine “deception”

We might avoid worrying that our artificial intelligence machines are trying to deceive us if we called it “Automated Intelligence” rather than “Artificial Intelligence.”

Are Facial Expressions a Clear, Simple Basis for Hiring Decisions?

Marketing AI to employers to analyze facial expressions ignores the fact that correlation is NOT causation

Have you heard of the Law of the Instrument? It just means, to quote one formulation, “He that is good with a hammer tends to think everything is a nail.” All any given problem needs is a good pounding. This is a risk with AI, as with amateur carpentry. But with AI, it can get you into more serious trouble. Take hiring, for instance.

McAfee: Assisted Driving System Is Easily Fooled

Defacing a road sign caused the system to dramatically accelerate the vehicle

Over time, machine vision will become harder to fool than the one that was recently tricked into rapid acceleration by a defaced sign. But it will still be true that a fooled human makes a better decision than a fooled machine because the fooled human has common sense, awareness, and a mind that reasons.

AI has changed our relationship to our tools

If a self-driving car careens into a storefront, who’s to blame? A new Seattle U course explores ethics in AI

A free course at Seattle University addresses the “meaning of ethics in AI.” I’ve signed up for it. One concern that I hope will be addressed is that we must not abdicate to machines the very thing that only we can do: treat other people fairly.