Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence

Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s, when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he’s worked as both a Principal Engineer and a Development Manager for industry leaders, such as Microsoft and Amazon, as well as numerous start-ups. While he spent most of that time working on other types of software, he’s remained engaged with and interested in Artificial Intelligence.

Archives

The Chatbots’ Most Dangerous Correlations

To give these machines blind trust really is the case of the blind leading the blind, and that is not likely to end well
When you prompt an LLM, the entire conversation acts as a prompt driving the reply. Thus your prompt can push the LLM in unforeseen, unintended directions.

Can an AI Really Develop a Mind of Its Own?

Specifically, can an AI develop a mind with its own goals and desires, capable of plans and strategies — as the authors of If Anyone Builds It believe?
Even if we adopt a materialist view of the mind, I believe I can show why it is not possible for a machine to develop a mind.

Fearing the Terminator, Missing the Obvious

In Part 1 of my review of the new AI Doom book, If Anyone Builds It, Everyone Dies, we look at how the authors first developed the underlying idea
By 2020, authors Yudkowsky and Soares were already Doomers, but the rapid success of ChatGPT and similar models heightened their worries.

Is AI really better than physicians at diagnosis?

The British Medical Journal found a serious problem with the studies

Of 83 studies of the performance of the Deep Learning algorithm on diagnostic images, only two had been randomized, as is recommended, to prevent bias in interpretation.

Will the COVID-19 Pandemic Promote Mass Automation?

Caution! Robots don’t file for benefits but that’s not all we need to know about them

I understand the panic many business leaders experience as they try to stay solvent while customers evaporate. Panic, however, is a poor teacher: AI-based automation will not only fail to solve all their problems, it may very well add to them. AI is not a magic box into which we can stuff our problems and make them disappear.

Star self-driving truck firm shuts; AI not safe enough soon enough

CEO Stefan Seltz-Axmacher is blunt about the cause: Machine learning “doesn’t live up to the hype”

Starsky Robotics was not just another startup overwhelmed by business realities. In 2019, it was named one of the world’s 100 most promising start-ups (CNBC) and one to watch by FreightWaves, a key trucking industry publication. But the AI breakthroughs did not appear.

AI Is Not Ready to Moderate Content!

In the face of COVID-19 quarantines for human moderators, some look to AI to keep the bad stuff off social media

Big social media companies have long wanted to replace human content moderators with AI. COVID-19 quarantines have only intensified that discussion. But AI is far, far from ready to successfully moderate content in an age where virtual monopolies make single-point failure a frequent risk.

All AI’s Are Psychopaths

We can use them but we can’t trust them with moral decisions. They don’t care why

Building an AI entails moving parts of our intelligence into a machine. We can do that with rules, (simplified) virtual worlds, statistical learning… We’ll likely create other means as well. But, as long as “no one is home”—that is, the machines lack minds—gaps will remain and those gaps, without human oversight, can put us at risk.

The “Moral Machine” Is Bad News for AI Ethics

Despite the recent claims of its defenders, there is no way we can outsource moral decision-making to an automated intelligence

Here’s the dilemma: The Moral Machine (the Trolley Problem, updated) feels necessary because the rules by which we order our lives are useless with automated vehicles. Laws embody principles that we apply. Machines have no mind by which to apply the rules. Instead researchers must train them with millions of examples and hope the machine extracts the correct message… 

Machines Never Lie but Programmers… Sometimes

A creative claim is floating around out there that bad AI results can arise from machine “deception”

We might avoid worrying that our artificial intelligence machines are trying to deceive us if we called it “Automated Intelligence” rather than “Artificial Intelligence.”

Are Facial Expressions a Clear, Simple Basis for Hiring Decisions?

Marketing AI to employers to analyze facial expressions ignores the fact that correlation is NOT causation

Have you heard of the Law of the Instrument? It just means, to quote one formulation, “He that is good with a hammer tends to think everything is a nail.” All any given problem needs is a good pounding. This is a risk with AI, as with amateur carpentry. But with AI, it can get you into more serious trouble. Take hiring, for instance.

McAfee: Assisted Driving System Is Easily Fooled

Defacing a road sign caused the system to dramatically accelerate the vehicle

Over time, machine vision will become harder to fool than the one that was recently tricked into rapid acceleration by a defaced sign. But it will still be true that a fooled human makes a better decision than a fooled machine because the fooled human has common sense, awareness, and a mind that reasons.

AI has changed our relationship to our tools

If a self-driving car careens into a storefront, who’s to blame? A new Seattle U course explores ethics in AI

A free course at Seattle University addresses the “meaning of ethics in AI.” I’ve signed up for it. One concern that I hope will be addressed is: We must not abdicate to machines the very thing that only we can do: Treat other people fairly.

Teaching Computers Common Sense Is Very Hard

Those fancy voice interfaces are little more than immense lookup tables guided by complex statistics

Researchers at the Allen Institute for Artificial Intelligence (AI2) published a paper recently, deflating claims of rapid progress toward giving computers common sense.

AI Can Help Spot Cancers—But It’s No Magic Wand

When I spoke last month about how AI can help with cancer diagnoses, I failed to appreciate some of the complexities of medical diagnosis

As a lawyer with medical training reminded us recently, any one image is a snapshot in time, a brief part of the patient’s whole story. And it’s the whole story that matters, not a single image, perhaps taken out of context.

Did the Economist Really “Interview” an AI?

Perhaps they have a private definition of what an interview “is”…

Faced with a claim that an AI language tool had given an interview, I took the advice I gave readers yesterday, and followed the links. What a revelation. The Economist story was more dishonest than the examples that Siegel discussed in Scientific American.

Can The Machine TELL If You Are Psychotic or Gay?

No, and the hype around what machine learning can do is enough to make old-fashioned tabloids sound dull and respectable

Media often co-operate with researchers’ inflated claims about machine learning’s powers of discovery. An ingenious “creative” approach to accuracy enables the misrepresentation, says data analyst Eric Siegel.