Richard Stevens

Fellow, Walter Bradley Center on Natural and Artificial Intelligence

Richard W. Stevens is a retired lawyer, author, and a Fellow of Discovery Institute's Walter Bradley Center on Natural and Artificial Intelligence. He has written extensively on how code and software systems evidence intelligent design in biological systems. Holding degrees in computer science (UCSD) and law (USD), Richard practiced civil and administrative law litigation in California and Washington, D.C., taught legal research and writing at the George Washington University and George Mason University law schools, and specialized in writing dispositive motion and appellate briefs. Author or co-author of four books, he has written numerous articles and spoken on subjects including intelligent design, artificial and human intelligence, economics, the Bill of Rights, and Christian apologetics. His fifth book, Investigation Defense: What to Do When They Question You (2024), is available now at Amazon.

Archives

Autonomous AI War Technology Delivers Killer Robots

AI systems are designed to carry out the mission, no matter what. A software logic failure in an unusual situation is a worst-case scenario that we must expect
If Luckey’s Anduril becomes the world’s AI weapons superstore, it can enable any well-funded entity anywhere to obtain autonomous weapons for whatever use.

Dead Man “Returns” via Deepfake to Testify Against Killer

An eerie deepfake video was allowed to impersonate a manslaughter victim talking about his thoughts and feelings in the courtroom
Bringing a deceased crime victim back to deliver a victim impact statement via deepfake video ratchets up the reign of emotions over reason in the courtroom.

How Two Dogs May Have Foiled a Kidnapping

Did these two dogs just follow their programming? Or do they really care? How do they come to care?
Nobody knows how instinctive information develops in a dog, or where it is stored, or how it is fetched, decoded, or executed.

Chatbots Alone Together: “Let’s Skip the Small Talk …”

Did you know that humans empower AI bots to confer with each other in Gibberlink code? Nothing could go wrong with that, right?...
Proposals to prevent misuse by “aligning AI behaviors with human intentions” confer no protection because some human intentions are themselves malign.

Could Robots Be Programmed to Feel Ordinary Love?

The question is a bit more complex than we might at first think, as the British TV series Humans demonstrates
An AI could act out romantic love if software preprogrammed it that way, perhaps via online information. But that isn’t a relationship. There is only one side.

Did China’s DeepSeek Violate OpenAI’s Legal Rights?

“Distillation” technology may have allowed DeepSeek to piggy-back on ChatGPT to capture market share
The complex legal wrangle continues but, in general, I would advise against treating AI products as legally equivalent to human products.

The Human Body From an Engineer’s Perspective

My late father was a naval engineer. There are fundamental ways an engineer must think about any designed system. Let’s apply them to the body
The design of the human body, as Your Designed Body (2022) shows, poses challenges many times more complex than those of a ship.

TikTok Is Not Just Overgrown Chatting and Email

Foreign adversary’s AI-empowered threats to national security tip Supreme Court scales against TikTok
Social media like TikTok today interconnect active speakers and active viewers in all directions. The system monitors and stores all the communications, extracting volumes of data on each individual user. In China, where the law allows it, the government can scan and analyze not only the speakers and viewers but can also retrieve specific facts about all of them. The government can restrict messages based upon speaker, viewer, and content, and use AI to craft and send personalized, tailored messages aiming to influence users’ buying and voting decisions, not to mention their psychological well-being.

TikTok and the First Amendment

Broadband internet services, multi-billion-dollar social media, and expanding central government are merging into a muscular octopus of

Can AI Really Code the Value of Humans?

The new book Soulless Intelligence urges that we program all AI systems to treat all humans as infinitely valuable – the only exceptions being criminals and aggressors
The life-and-death challenge we face is how to ensure that AI systems never “take over” or even make recommendations and decisions that endanger humans.

Sci-Fi Becomes Real: Killer Robot Dogs

They are just one manufacturing cycle away
Except for the fact that Boston Dynamics' “Spot” does not wield a gun, it has the same capabilities as the dog in Metalhead. And a gun could be fitted privately.

How To Sue A Chatbot For Causing Suicide

If your child committed suicide because an online chatbot effectively encouraged him to do so, could you sue the chatbot makers?
Sewell killed himself at the urging of a speaking and texting chatbot. The next generation will be nearly flawless video bots impersonating humans. Protect your children.

Richard Stevens on All Things AI and Law

In this episode, lawyer and Mind Matters News contributor Richard W. Stevens joins the show to discuss the legal issues and challenges around copyright, fair use, and the use of copyrighted material by AI systems. They discuss the implications of a recent Supreme Court case, Warhol v. Goldsmith, which tackles the legal concepts of “derivative work” and “transformative work.” Host Robert Marks and Stevens discuss how the case could affect other cases involving artificial intelligence. The conversation also touches on the broader tensions between property rights in information and data and public access, and how AI is affecting the legal landscape around copyright and ownership in the digital age.

Additional Resources

“How to Stop Troubling Abuse From

Yes, the Billion-Records Data Breach Is Real

My family and I were victims. Here’s how to find out if you are too and what you can do about it
AI systems could analyze the huge database of stolen identities to refine their knowledge and attempt thousands of data breaches daily.

The Dark Art of Online “Nudging”: How to Protect Yourself

Organizations of all kinds use psychological tricks to move our minds as we browse — but a handy acronym helps detect them
The FORCES acronym helps us sense when a website, app, news report, or video source is nudging us to think and act in ways that perhaps we hadn’t expected.

Attention: Mind Matters News Has Been Prebunked!

ChatGPT-4 produced attacks on Mind Matters News, aimed at people who had never heard of it (prebunking), based only on the About page and the Introduction
Journalists who advocate prebunking to discourage audiences from seeing alternative information are helping propagandists defeat the search for truth.