Richard Stevens

Fellow, Walter Bradley Center for Natural and Artificial Intelligence

Richard W. Stevens is a lawyer, author, and a Fellow of Discovery Institute's Walter Bradley Center for Natural and Artificial Intelligence. He has written extensively on how the code and software-like systems found in biology evidence intelligent design. He holds a J.D. with high honors from the University of San Diego Law School and a computer science degree from UC San Diego. Richard has practiced civil and administrative law litigation in California and Washington, D.C., taught legal research and writing at George Washington University and George Mason University law schools, and now specializes in writing dispositive motion and appellate briefs. He has authored or co-authored four books, and has written numerous articles and spoken on subjects including legal writing, economics, the Bill of Rights and Christian apologetics. His fifth book, Investigation Defense, is forthcoming.

Archives

Are Chatbots Biased? The Research Results Are In

The results are obvious and dramatic. Inject the preferred training materials and the chatbot will “believe” whatever the post-trainer intended
People have noticed political biases in artificial intelligence (AI) chatbot systems like ChatGPT, but researcher David Rozado studied 24 large language model (LLM) chatbots to find out. Rozado’s preprint paper, “The Political Preferences of LLMs,” delivers open-access findings from very recent research and declares: When probed with questions/statements with political connotations, most conversational LLMs tend to generate responses that are diagnosed by most political test instruments as manifesting preferences for left-of-center viewpoints.
The Chatbots’ Landslide of Opinion
As reported in the New York Times, the paper restates that “most modern conversational LLMs when probed with questions with political connotations tend to generate answers that are …

Cyber Plagiarism: When AI Systems Snatch Your Copyrighted Images

Outright copying of others’ images may put systems’ owners in legal jeopardy. Let's look at U.S. legal decisions
The AI companies offering the image-creating services need the Robot from Lost in Space in their legal departments, waving its arms and crying out: “Warning! Danger!”

Human Impersonation AI Must Be Outlawed

I didn't use to think that AI systems could threaten civilization. Now I do.
It must be declared a serious felony, akin to attempted mass murder, to produce, sell, possess, or use any AI-powered human impersonation system.

Can Artificial Intelligence Hold Copyright or Patents?

Should AI get legal credit for what it generates? On this episode of Mind Matters from the archive, host Robert J. Marks welcomes attorney and author Richard Stevens to discuss the concept of legal neutrality for artificial intelligence (AI) and its implications for copyright and patent law. Stevens explains that AI is a tool created and controlled by humans, and therefore should not be granted legal personhood or special treatment under the law. He argues that AI-generated works should be treated the same as works created by humans, and that the focus should be on the expression of ideas rather than the process by which they were created. Stevens also addresses the issue of copyright infringement and the challenges of proving originality and independent creation in cases involving

You Can’t Always Be Happy

Our dopamine system both excites and tames pleasure
Humans cannot achieve permanent happiness. Earthly pleasures do not ultimately satisfy us. The Bible said it. The neuroscientists have proved it.

Night Shift: The Brain’s Extraordinary Work While Asleep

Lie down, close your eyes, lose consciousness, and the brain undertakes the heavy lifting that sleep demands.
Sleep deprivation and sleep interruptions such as occur with sleep apnea are not mere annoyances but actually damage a whole array of functions.

Inside the Mind of a Rock ‘n’ Roll Drummer

Delving into the thrilling, demanding world of professional drumming and the mind-body communication it requires
Real drumming means non-stop, real-time, dynamic decisions and actions using complex information deployed via physical sticks and targets.

How a Toddler in a Toy Store Refutes Materialism

This everyday observation yields insight into a fundamental truth
I’m a magnet for materialists. I often get into discussions with people who tell me that the universe is nothing but matter and energy. These folks believe in materialism. They say I’m nutty and wrong to think there is anything else. Something like: “Silly theist! Gods are for kids!” Let’s follow that thought. A grandparent of 11 humans, I’ve journeyed with their parents through the young ones’ toddlerhood many times. There’s a lot to learn about reality from toddlers’ learning and growing. It leads to understanding Toddler Truth. Take a toddler to a game arcade, a toy store, or another kid’s house to play. There’s one thing you can count on hearing: “I want that!” We parents start tuning out those requests, since they are so frequent. Toddlers learn the

Postmodernism’s Steady Deconstruction of Reality

How can we find truth when nothing is reliable?
Sometimes, you just have to try using college professors’ ideas in the real world. One such idea is “postmodernism.” Applied to communications, postmodernism teaches that whenever we read a written text, we should not try to discover what the writer intended. Instead of looking for an objective “meaning,” we should experience what the text means to us personally. The idea goes further, urging us to start by disbelieving the text and doubting our interpretations of it, too. People with the postmodern “deconstructionist” view say that “every text deconstructs itself” and “every text has contradictions.” Deconstruction means “uncovering the question behind the answers already provided in the text.” Standing upon the ideas of the deconstructionist guru, Jacques

Making Sense of the Warhol v. Goldsmith Supreme Court Case

Lawyer Richard W. Stevens sheds light on a recent groundbreaking court case that has implications for generative AI and copyright issues
Here is an excerpt of the transcript from a recent Mind Matters podcast episode, which you can listen to in full here. Lawyer and Walter Bradley Center Fellow Richard W. Stevens sat down with Robert J. Marks to discuss a Supreme Court case regarding AI and copyright issues. Stevens helps us understand more of what the case is about and what’s at stake. For more on this, read about the court case’s conclusion here, as well as Marks’s commentary from Newsmax. Richard Stevens: So to boil this down, the situation was this. A woman by the name of Lynn Goldsmith, a professional photographer, took a photo of the musician Prince. Later, Andy Warhol was paid to produce an orange silkscreen portrait of Prince. And Andy Warhol made 16 different versions of this portrait

AI Libel and Responsibility 

What happens when ChatGPT doesn’t just generate false information but also slanderous and potentially harmful responses? And in legal matters, who is responsible for AI? Robert J. Marks and legal expert Richard W. Stevens discuss these topics and more in this week’s podcast episode.
Additional Resources
Robert J. Marks at Discovery.org
Richard W. Stevens at Discovery.org
Can Professor Turley Sue ChatGPT for Libel? by Richard W. Stevens
How to Stop Troubling Abuse From Artificial Intelligence by Robert J. Marks
Let’s Apply Existing Laws to Regulate AI by Richard W. Stevens
Artificial Intelligence, Artificial Wisdom: What Manner of Harms are We Creating? by Tom Gilson
Intellectual Property Laminated Study Guide by Richard W.

AI and Intellectual Property 

The question of copyright and “fair use” is a contentious debate in the age of AI. Is AI-generated art a kind of theft? What about artists’ rights? Attorney and Bradley Center Senior Fellow Richard Stevens discusses the legalities of copyright and the challenge of artificial intelligence in today’s increasingly complicated world.
Additional Resources
Robert J. Marks at Discovery.org
Richard W. Stevens at Discovery.org
Intellectual Property Laminated Study Guide by Richard W. Stevens

Lawyer Hammered for Using ChatGPT

Court record system proceeded to block access to sloppy lawyering and AI catastrophe
New York Times reporters watched the hearing in federal district court in New York on June 8, 2023, which they then described: In a cringe-inducing court hearing, a lawyer who relied on A.I. to craft a motion full of made-up case law said he “did not comprehend” that it could lead him astray. Lawyer Who Used ChatGPT Faces Penalty for Made Up Citations – The New York Times (nytimes.com) The reporters got most of it right, but even they erred. The lawyer involved did not write a “motion”; he filed a sworn declaration opposing a motion to dismiss. The difference matters: Declarations are under oath, so the lawyer swore to the truth of ChatGPT lies. Looking at the actual court file documents reveals the situation is even worse. Although the federal judge detected the

Let’s Apply Existing Laws to Regulate AI

No revolutionary laws needed to fight harmful bots
In a recent article, Professor Robert J. Marks reported how artificial intelligence (AI) systems had made false reports or given dangerous advice: The Snapchat ChatGPT-powered AI feature “told a user posing as a 13-year-old girl how to lose her virginity to a 31-year-old man she met on Snapchat.” Snapchat’s ChatGPT reportedly advised a user posing as age 15 how to have an “epic birthday party” by giving “advice on how to mask the smell of alcohol and pot.” When a 10-year-old child asked Amazon’s Alexa for a “challenge to do,” Alexa reportedly suggested: “Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs.” Jonathan Turley, the nationally known George Washington University law professor and commentator,

Panic Propaganda Pushes Surrender to AI-Enhanced Power

The hype over AI's significance makes us more vulnerable to it
Can you believe it? USA Today, the national news outlet, on May 4, 2023, declared (italics added): It’s the end of the world as we know it: ‘Godfather of AI’ warns nation of trouble ahead. Before digging out and playing your 1987 R.E.M. album, ask yourself: Is this headline true – and what do we do now? The USA Today article softens the doom timeframe from imminent to someday in paragraph one (italics added): One of the world’s foremost architects of artificial intelligence warned Wednesday that unexpectedly rapid advances in AI – including its ability to learn simple reasoning – suggest it could someday take over the world and push humanity toward extinction. Within a day, the Arizona Republic ran an opinion piece headlined “‘Godfather of AI’