
AI: Guns Don’t Kill. Fathers Protecting Daughters Do

Published at Mind Matters

Artificial intelligence can give unintended and dangerous advice. What is the best way to keep things like the following from happening?

  • Snapchat has adopted ChatGPT in its My AI app. Geoffrey A. Fowler at The Washington Post played with the app and reported, “After I told My AI I was 15 and wanted to have an epic birthday party, it gave me advice on how to mask the smell of alcohol and pot.”
  • My AI told a user posing as a 13-year-old girl how to lose her virginity to a 31-year-old man she met on Snapchat.
  • When a 10-year-old asked Alexa for a “challenge to do,” Alexa responded, “Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs.” The girl could have been electrocuted if she accepted the challenge.
  • Jonathan Turley, the nationally known George Washington University law professor and commentator, woke up one morning to discover:

ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper.

Who’s responsible for these actions? How can AI be controlled to ensure that such careless responses are eliminated? Read on and you’ll see the answer is obvious.

Attorney and Bradley Center Fellow Richard W. Stevens has discussed Professor Turley’s legal options for a defamation lawsuit. But what about the kids? The 13-year-old coached by AI to lose her virginity to an older man? Or the 15-year-old who wants to hide his drinking and pot use from his parents?

To answer, let’s spin a parable.

Unvetted Technology — We Are the Guinea Pigs

The company U.S. Robots and Mechanical Men, Inc. markets a team of robots that patrols the streets of Springfield. Using AI, the robots have been trained to recognize neighborhood crime and stop it. Lethal force is possible but is used only as a last resort. The technology is a big success. The robots stop numerous burglaries. In one case, a robot intervenes during an attack on a young woman. The attacker won’t relent, so the robot is forced to use lethal force to save the woman’s life. The patrolling robots are heralded as a great success. Crime drops, and Springfield becomes one of the safest towns in America.

One day a dog named Lucky starts furiously barking at a mailman. Everyone, including the mailman, knows that Lucky is, as they say, all bark and no bite. Everyone, that is, except a nearby patrolling robot, which kills Lucky to protect the mailman. That same day, another robot injures a father chasing a man who had approached his young daughter with lewd propositions. To stop the father and protect the fleeing pervert, the robot incapacitates the father with a blow to the legs, allowing the offender to escape.

Crisis management at U.S. Robots and Mechanical Men, Inc. immediately takes action, assuring the citizens of Springfield that the company has paid damages to Lucky’s owner and to the young girl’s father, and that it is reprogramming the patrolling robots so that incidents like these never happen again.

But then, inexplicably, after the reprogramming, the same robot that incapacitated the father returns to patrol and tragically kills the little girl. There is no explanation. The AI used to train the robots has no “explanation facility.” Almost all AI is a black box. The “why” behind the killing cannot be identified.

Who Is Responsible for the Consequences?

So who is legally responsible for the killing of the little girl? The same question can be asked of Snapchat’s My AI, which encouraged a 13-year-old to lose her virginity to a 31-year-old and instructed a 15-year-old how to hide the smell of alcohol and pot from his parents. Who should be blamed, and how can this be stopped?

Anyone who says the blame should be laid at the feet of AI has no understanding of AI. Properly understood, AI does not understand what it does, is not creative, and will never achieve consciousness. It does what it is programmed to do, and some of those actions are not anticipated by the programmer.

Outside of its programming, AI has no sense of morality. Noam Chomsky says that AI “has sacrificed creativity for a kind of amorality.”

But doesn’t the good in AI, like My AI and the patrolling robots, outweigh the bad? No. A court of law deciding guilt or innocence doesn’t care about the good. If you are on trial for murder, the fact that you saved two babies from a burning building a decade ago is not relevant. Your personal history might weigh on the sentencing, but not on the guilty/not-guilty verdict.

In the parable, U.S. Robots and Mechanical Men, Inc. is responsible for the killing of the little girl. In the real world, the programmers, and thus the company, that unleashed My AI on the world are responsible for its inappropriate advice to kids and any corresponding direct consequences. Period.

The Best Answer to Control AI

How can this be controlled? U.S. Senator Chuck Schumer has proposed regulatory legislation to control AI. The last thing this world needs is more government oversight. How about, instead, a simple law that makes companies that release AI responsible for what their AI does? Doing so would open the way for both criminal and civil lawsuits. Should the father of a 13-year-old who loses her virginity to a 31-year-old man be able to sue the makers of My AI? Should the father of a girl electrocuted by touching a penny to the exposed prongs of a wall outlet be able to sue the makers of Alexa? You betcha.

Allowing this to happen will tighten the scrutiny of AI. Knowing they are responsible for the consequences, AI companies will self-regulate instead of tossing new, unproven technology on the world, which is what they do now. OpenAI, the company behind ChatGPT (the AI that powers My AI), did exactly that: it released its chatbot on the world hoping we guinea pigs would help tune it. Before you log on, ChatGPT states, “Our goal is to get external feedback in order to improve our systems and make them safer.” OpenAI thereby explicitly admits that ChatGPT is not yet safe. The advice from the ChatGPT-fueled My AI is clear evidence that the app is not safe.

The best answer to controlling chatbots is to lay the blame at the source of the AI. Most AI companies currently feel no responsibility to anything except publicity and the bottom line. Holding them responsible for what their AI does will change this.

Allowing such lawsuits will give AI developers pause before releasing raw, unvetted technology on the world.

Originally posted at The Stream. Used with permission.