Richard Stevens

Fellow, Walter Bradley Center on Natural and Artificial Intelligence

Richard W. Stevens is a lawyer, author, and a Fellow of Discovery Institute's Walter Bradley Center on Natural and Artificial Intelligence. He has written extensively on how code and software systems evidence intelligent design in biological systems. He holds a J.D. with high honors from the University of San Diego Law School and a computer science degree from UC San Diego. Richard has practiced civil and administrative law litigation in California and Washington, D.C., taught legal research and writing at the George Washington University and George Mason University law schools, and now specializes in writing dispositive motions and appellate briefs. He has authored or co-authored four books and has written numerous articles and spoken on subjects including legal writing, economics, the Bill of Rights, and Christian apologetics. His fifth book, Investigation Defense, is forthcoming.

Archives

Cyber Plagiarism: When AI Systems Snatch Your Copyrighted Images

Outright copying of others’ images may put systems’ owners in legal jeopardy. Let’s look at U.S. legal decisions.
The AI companies offering the image-creating services need the Robot from Lost in Space in their legal departments, waving its arms and crying out: “Warning! Danger!”

Human Impersonation AI Must Be Outlawed

I didn't use to think that AI systems could threaten civilization. Now I do.
(Previous version first appeared at theepochtimes.com on 1/12/2024) In 15 minutes of techno-evolutionary time, artificial intelligence-powered systems will threaten our civilization. Yesterday, I didn’t think so; I do now. Here’s how it will happen. The AI Human Impersonation Danger Recall how in 2023 criminals used an artificial intelligence (AI) system to phone an Arizona mother and say they were holding her daughter for ransom. The AI system perfectly mimicked her daughter’s voice, down to the word choices and sobs. The terrified mom found her daughter safe at home; only then did she determine the call was a scam. That crime showed the power of AI audio alone to deceive and defraud people. Today, Mom gets a text message demanding a ransom, threatening that the caller

Facebook and Instagram Allegedly Hook Youngsters with Dopamine-Triggering Tactics

“Social media use can negatively affect teens, distracting them, disrupting their sleep, and exposing them to bullying, rumor spreading, unrealistic views of other people’s lives and peer pressure,” according to the Mayo Clinic. Teens and younger children accessing social media repeatedly or for long periods face heightened risks of mental health problems, including depression, anxiety, social isolation, negative body image, decreased learning ability, even serious thoughts of suicide. Social media that lures kids into excessive use must come from somewhere. At the top of the list is the 800-billion-dollar multinational conglomerate, Meta Platforms, Inc. (“Meta”), owner and operator of the social media platforms Facebook and Instagram. To hold Meta accountable for social

Can Artificial Intelligence Hold Copyright or Patents?

Should AI get legal credit for what it generates? On this episode of Mind Matters from the archive, host Robert J. Marks welcomes attorney and author Richard Stevens to discuss the concept of legal neutrality for artificial intelligence (AI) and its implications for copyright and patent law. Stevens explains that AI is a tool created and controlled by humans, and therefore should not be granted legal personhood or special treatment under the law. He argues that AI-generated works should be treated the same as works created by humans, and that the focus should be on the expression of ideas rather than the process by which they were created. Stevens also addresses the issue of copyright infringement and the challenges of proving originality and independent creation in cases involving

You Can’t Always Be Happy

Our dopamine system both excites and tames pleasure
Humans cannot achieve permanent happiness. Earthly pleasures do not ultimately satisfy us. The Bible said it. The neuroscientists have proved it. A non-stop pleasure-filled life is not possible. Death alone does not end human pleasure — the brain does. Research about dopamine explains why. Dopamine is a molecule, a neurotransmitter that carries information between neurons in the brain. Sometimes called “the feel-good neurotransmitter,” dopamine energizes our mood, motivation, and attention. It helps us think and plan, and especially to strive, focus, and find things interesting. The Ups and Downs of Dopamine So, if our brain produces high dopamine levels, then we are happy as long as they remain high, right? Actually, no. Dr. Anna Lembke in her 2021 book, Dopamine

Night Shift: The Brain’s Extraordinary Work While Asleep

Lie down, close your eyes, lose consciousness, and the brain undertakes the heavy lifting that sleep demands.
What is consciousness? “Consciousness is what allows you to think, remember, and feel things.” It includes awareness of yourself. Descartes’ famous line, “I think, therefore I am,” declared his consciousness. Conscious thinking means our brains, our minds, are sensing, observing, memorizing, recalling, decoding, analyzing, calculating, interrelating, cross-referencing, rearranging, expanding, generalizing, communicating, and even creating. Those coordinated operations, part of cognition, require real work. After all that brain work, it should be time for a rest, right? Nope. When a supermarket closes, the workers don’t just switch off the lights and go home. Overnight the workers clean, restock, organize, repair, and get the store ready for the next day. It’s the same

Congress Boosts AI-Enabled Automobile “Kill Switch” Technology to Control Drivers

Federal agency power poised to extend to your every move.
Next thing you know, you’ll be sitting in the driver’s seat when Siri or Alexa informs you: “Sorry, you may not drive. This vehicle is temporarily disabled. Please try again later.” There is no override, no “lost password” feature to bypass the lockdown. It won’t matter where you were going, nor how urgently you needed to go. The AI-powered system decides you are not fit to drive. Yet another dystopian fantasy? Hardly. Congress and the President enacted Public Law 117–58 (Nov. 15, 2021), mandating national rules that require passenger vehicles “to be equipped with advanced drunk and impaired driving prevention technology.” Say it that way, and who could speak against the idea? After all, AI systems would be saving lives. Continued funding approved by

Lawsuit Champions Human Creativity Over AI Mimicry

Copyright laws can protect against sophisticated plagiarism.
Is it possible to violate the copyright on a written work without actually copying the exact words in it? Yes. And that fact points up how ChatGPT can trample human authors’ rights to their creative work products. The previous article, Authors Guild Sues OpenAI for Unlawful Copying of Creative Works, described the lawsuit filed by The Authors Guild and many individual writers against OpenAI (and related defendants) for having taught ChatGPT how to copy the writers’ articles and books and then to generate “derivative works.” The lawsuit first charges that OpenAI made unauthorized copies of billions of words of text, including likely thousands of entire books and articles, to use as training materials for ChatGPT. Making such copies would ordinarily violate the

Authors Guild Sues OpenAI For Unlawful Copying of Creative Works

Did ChatGPT make physical copies of copyrighted books and articles?
How to easily violate a written work’s copyright protection: Make a duplicate copy of it. A photocopy will do. Fishing for a criminal copyright infringement prosecution? Make many copies and sell them. Word-for-word copying of an entire book or article is an obvious violation. Copying significant parts usually violates the law as well. There are various exceptions to the rule, but those are the easy cases. Spotlight 2023. The Authors Guild and other creative writers are suing OpenAI (and related entities) for teaching ChatGPT how to copy the writers’ articles and books and then generate “derivative works,” i.e., “material that is based on, mimics, summarizes, or paraphrases” professional writers’ works and harms the market for them. ChatGPT can create sequels or

Inside the Mind of a Rock ‘n’ Roll Drummer

Delving into the thrilling, demanding world of professional drumming and the mind-body communication it requires
Real drumming means non-stop, real-time, dynamic decisions and actions using complex information deployed via physical sticks and targets.

How a Toddler in a Toy Store Refutes Materialism

This everyday observation yields insight into a fundamental truth
I’m a magnet for materialists. I often get into discussions with people who tell me that the universe is nothing but matter and energy. These folks believe in materialism. They say I’m nutty and wrong to think there is anything else. Something like: “Silly theist! Gods are for kids!” Let’s follow that thought. A grandparent of 11 humans, I’ve journeyed with their parents through the young ones’ toddlerhood many times. There’s a lot to learn about reality from toddlers’ learning and growing. It leads to understanding Toddler Truth. Take a toddler to a game arcade, a toy store, or another kid’s house to play. There’s one thing you can count on hearing: “I want that!” We parents start tuning out those requests, since they are so frequent. Toddlers learn the

Postmodernism’s Steady Deconstruction of Reality

How can we find truth when nothing is reliable?
Sometimes, you just have to try using college professors’ ideas in the real world. One such idea is “postmodernism.” Applied to communications, postmodernism teaches that whenever we read a written text, we should not try to discover what the writer intended. Instead of looking for an objective “meaning,” we should experience what the text means to us personally. The idea goes further, urging us to start by disbelieving the text and doubting our interpretations of it, too. People with the postmodern “deconstructionist” view say, “every text deconstructs” itself, and “every text has contradictions.” Deconstruction means “uncovering the question behind the answers already provided in the text.” Standing upon the ideas of the deconstructionist guru, Jacques

Making Sense of the Warhol v. Goldsmith Supreme Court Case

Lawyer Richard W. Stevens sheds light on a recent groundbreaking court case that has implications for generative AI and copyright issues
Here is an excerpt of the transcript from a recent Mind Matters podcast episode, which you can listen to in full here. Lawyer and Walter Bradley Center Fellow Richard W. Stevens sat down with Robert J. Marks to discuss a Supreme Court Case regarding AI and copyright issues. Stevens helps us understand more of what the case is about and what’s at stake. For more on this, read about the court case’s conclusion here, as well as Marks’s commentary from Newsmax. Richard Stevens: So to boil this down, the situation was this. A woman by the name of Lynn Goldsmith, a professional photographer, took a photo of the musician named Prince. Later, Andy Warhol was paid to produce an orange silkscreen portrait of Prince. And Andy Warhol made 16 different versions of this portrait

AI Libel and Responsibility 

What happens when ChatGPT doesn’t just generate false information but also slanderous and potentially harmful responses? And in legal matters, who is responsible for AI? Robert J. Marks and legal expert Richard W. Stevens discuss these topics and more in this week’s podcast episode.

Additional Resources

Robert J. Marks at Discovery.org
Richard W. Stevens at Discovery.org
Can Professor Turley Sue ChatGPT for Libel? by Richard W. Stevens
How to Stop Troubling Abuse From Artificial Intelligence by Robert J. Marks
Let’s Apply Existing Laws to Regulate AI by Richard W. Stevens
Artificial Intelligence, Artificial Wisdom: What Manner of Harms are We Creating? by Tom Gilson
Intellectual Property Laminated Study Guide by Richard W.

AI and Intellectual Property 

The question of copyright and “fair use” is a contentious debate in the age of AI. Is AI-generated art a kind of theft? What about artists’ rights? Attorney and Bradley Center Senior Fellow Richard Stevens discusses the legalities of copyright and the challenge of artificial intelligence in today’s increasingly complicated world.

Additional Resources

Robert J. Marks at Discovery.org
Richard W. Stevens at Discovery.org
Intellectual Property Laminated Study Guide by Richard W. Stevens

Lawyer Hammered for Using ChatGPT

The court record system then blocked access to the sloppy lawyering and AI catastrophe
New York Times reporters watched the hearing in federal district court in New York on June 8, 2023, and then described it: In a cringe-inducing court hearing, a lawyer who relied on A.I. to craft a motion full of made-up case law said he “did not comprehend” that could lead him astray. (“Lawyer Who Used ChatGPT Faces Penalty for Made-Up Citations,” The New York Times) The reporters got most of it right, but even they erred. The lawyer involved did not write a “motion”; he filed a sworn declaration opposing a motion to dismiss. The difference matters: declarations are under oath, so the lawyer swore to the truth of ChatGPT’s lies. Looking at the actual court file documents reveals the situation is even worse. Although the federal judge detected the

Let’s Apply Existing Laws to Regulate AI

No revolutionary laws needed to fight harmful bots
In a recent article, Professor Robert J. Marks reported how artificial intelligence (AI) systems had made false reports or given dangerous advice: The Snapchat ChatGPT-powered AI feature “told a user posing as a 13-year-old girl how to lose her virginity to a 31-year-old man she met on Snapchat.” Snapchat’s ChatGPT reportedly advised a user posing as age 15 how to have an “epic birthday party” by giving “advice on how to mask the smell of alcohol and pot.” When a 10-year-old child asked Amazon’s Alexa for a “challenge to do,” Alexa reportedly suggested: “Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs.” Jonathan Turley, the nationally known George Washington University law professor and commentator,

Panic Propaganda Pushes Surrender to AI-Enhanced Power

The hype over AI's significance makes us more vulnerable to it
Can you believe it? USA Today, the national news outlet, on May 4, 2023, declared (italics added): It’s the end of the world as we know it: ‘Godfather of AI’ warns nation of trouble ahead. Before digging out and playing your 1987 R.E.M. album, ask yourself: Is this headline true – and what do we do now? The USA Today article walks the doom timeframe back from imminent to someday in paragraph one (italics added): One of the world’s foremost architects of artificial intelligence warned Wednesday that unexpectedly rapid advances in AI – including its ability to learn simple reasoning – suggest it could someday take over the world and push humanity toward extinction. Within a day, the Arizona Republic ran an opinion piece headlined “‘Godfather of AI’

20 Ways AI Enables Criminals

If you cannot believe your eyes and ears, then how can you protect yourself and your family from crime?
As reported recently and relayed in this publication, a mom in Arizona described how criminals called her to say they were holding her daughter for ransom and used artificial intelligence (AI) to perfectly mimic her daughter’s voice, down to the word choices and sobs. Only because the mom found her daughter safe in her home could she know the call was a scam. Meanwhile, despite efforts to limit ChatGPT’s excursions into the dark side of human perversity, the wildly famous bot can be persuaded to discuss details of sordid sexuality. In one experiment with Snapchat’s MyAI chatbot, an adult pretending to be a 13-year-old girl asked for advice about having sex for the first time – in a conversation in which “she” said she was in a relationship with a 31-year-old man. The

Can Professor Turley Sue ChatGPT for Libel?

The world wide web of reputation destruction is here
Isn’t there a law against falsely accusing people of serious crimes or misconduct and then publishing damaging lies to the world? Yes. For centuries in English-speaking countries, the victim of such lies could sue the false accuser in civil court for libel per se. Nowadays, libel and its oral statement cousin, slander, are grouped together as defamation. Under American law, it isn’t easy to bring and win a lawsuit even when your case seems strong, but at least the law provides some recourse for defamation. How about when the false accuser is ChatGPT? Jonathan Turley, the nationally known George Washington University law professor and commentator, woke up one morning to discover: ChatGPT falsely reported on a claim of sexual harassment that was never made