Of 83 studies of the performance of Deep Learning algorithms on diagnostic images, only two were randomized, as is recommended to prevent bias in interpretation.
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he’s worked both as a Principal Engineer and Development Manager for industry leaders, such as Microsoft and Amazon, and numerous start-ups. While he spent most of that time working on other types of software, he’s remained engaged with and interested in Artificial Intelligence.
I understand the panic many business leaders experience as they try to stay solvent while customers evaporate. Panic, however, is a poor teacher: AI-based automation will not only fail to solve all their problems, it may very well add to them. AI is not a magic box into which we can stuff our problems and make them disappear.
Starsky Robotics was not just another startup overwhelmed by business realities. In 2019, it was named one of the world’s 100 most promising start-ups (CNBC) and one to watch by FreightWaves, a key trucking industry publication. But the AI breakthroughs did not appear.
Big social media companies have long wanted to replace human content moderators with AI. COVID-19 quarantines have only intensified that discussion. But AI is far, far from ready to successfully moderate content in an age where virtual monopolies make single-point failure a frequent risk.
Building an AI entails moving parts of our intelligence into a machine. We can do that with rules, (simplified) virtual worlds, statistical learning… We’ll likely create other means as well. But, as long as “no one is home”—that is, the machines lack minds—gaps will remain and those gaps, without human oversight, can put us at risk.
Here’s the dilemma: The Moral Machine (the Trolley Problem, updated) feels necessary because the rules by which we order our lives are useless with automated vehicles. Laws embody principles that we apply. Machines have no mind by which to apply the rules. Instead, researchers must train them with millions of examples and hope the machine extracts the correct message…
We might avoid worrying that our artificial intelligence machines are trying to deceive us if we called the field “Automated Intelligence” rather than “Artificial Intelligence.”
Have you heard of the Law of the Instrument? It just means, to quote one formulation, “He that is good with a hammer tends to think everything is a nail.” All any given problem needs is a good pounding. This is a risk with AI, as with amateur carpentry. But with AI, it can get you into more serious trouble. Take hiring, for instance.
Over time, machine vision will become harder to fool than the one that was recently tricked into rapid acceleration by a defaced sign. But it will still be true that a fooled human makes a better decision than a fooled machine because the fooled human has common sense, awareness, and a mind that reasons.
A free course at Seattle University addresses the “meaning of ethics in AI.” I’ve signed up for it. One concern I hope it will address: we must not abdicate to machines the very thing that only we can do, which is to treat other people fairly.