Richard Stevens

Fellow, Walter Bradley Center for Natural and Artificial Intelligence

Richard W. Stevens is a lawyer, author, and a Fellow of Discovery Institute’s Walter Bradley Center for Natural and Artificial Intelligence. He has written extensively on how code and software systems evidence intelligent design in biological systems. He holds a J.D. with high honors from the University of San Diego Law School and a computer science degree from UC San Diego. Richard has practiced civil and administrative law litigation in California and Washington, D.C., taught legal research and writing at George Washington University and George Mason University law schools, and now specializes in writing dispositive motion and appellate briefs. He has authored or co-authored four books and has written numerous articles and spoken on subjects including legal writing, economics, the Bill of Rights, and Christian apologetics. His fifth book, Investigation Defense, is forthcoming.

Archives

Inside the Mind of a Rock ‘n’ Roll Drummer

Delving into the thrilling, demanding world of professional drumming and the mind-body communication it requires
After talking all about artificial intelligence (AI), ChatGPT, and the legal rights of robots, let’s Take Five. Time to follow Rod Serling’s Twilight Zone path and travel to another dimension, of sight, of sound, and of mind. Cue up the vinyl or the mp3s; it’s time to explore rock ‘n’ roll music from the inside. What practically defines rock ‘n’ roll? Chuck Berry said it was the “back beat” – the prominent rhythm on beats 2 and 4. It’s the beat you can’t lose, as The Beatles agreed. Huey Lewis and the News nailed it: “The heart of rock and roll is the beat.” Where does the beat come from, the rhythm that defines rock ‘n’ roll? Not often the Read More ›

How a Toddler in a Toy Store Refutes Materialism

This everyday observation yields insight into a fundamental truth
I’m a magnet for materialists. I often get into discussions with people who tell me that the universe is nothing but matter and energy. These folks believe in materialism. They say I’m nutty and wrong to think there is anything else. Something like: “Silly theist! Gods are for kids!” Let’s follow that thought. A grandparent of 11 humans, I’ve journeyed with their parents through the young ones’ toddlerhood many times. There’s a lot to learn about reality from toddlers’ learning and growing. It leads to understanding Toddler Truth. Take a toddler to a game arcade, a toy store, or another kid’s house to play. There’s one thing you can count on hearing: “I want that!” We parents start tuning out Read More ›

Postmodernism’s Steady Deconstruction of Reality

How can we find truth when nothing is reliable?
Sometimes, you just have to try using college professors’ ideas in the real world. One such idea is “postmodernism.” Applied to communications, postmodernism teaches that whenever we read a written text, we should not try to discover what the writer intended. Instead of looking for an objective “meaning,” we should experience what the text means to us personally. The idea goes further, urging us to start by disbelieving the text and doubting our interpretations of it, too. People with the postmodern “deconstructionist” view say, “every text deconstructs” itself, and “every text has contradictions.” Deconstruction means “uncovering the question behind the answers already provided in the text.” Standing upon the ideas of the deconstructionist guru, Jacques Derrida, and his followers, one Read More ›

Making Sense of the Warhol v. Goldsmith Supreme Court Case

Lawyer Richard W. Stevens sheds light on a recent groundbreaking court case that has implications for generative AI and copyright issues
Here is an excerpt of the transcript from a recent Mind Matters podcast episode, which you can listen to in full here. Lawyer and Walter Bradley Center Fellow Richard W. Stevens sat down with Robert J. Marks to discuss a Supreme Court Case regarding AI and copyright issues. Stevens helps us understand more of what the case is about and what’s at stake. For more on this, read about the court case’s conclusion here, as well as Marks’s commentary from Newsmax. Richard Stevens: So to boil this down, the situation was this. A woman by the name of Lynn Goldsmith, a professional photographer, took a photo of the musician named Prince. Later, Andy Warhol was paid to produce an orange Read More ›

AI Libel and Responsibility 

What happens when ChatGPT doesn’t just generate false information but also slanderous and potentially harmful responses? And in legal matters, who is responsible for AI? Robert J. Marks and legal expert Richard W. Stevens discuss these topics and more in this week’s podcast episode.

AI and Intellectual Property 

The question of copyright and “fair use” is a contentious debate in the age of AI. Is AI-generated art a kind of theft? What about artists’ rights? Attorney and Bradley Center Senior Fellow Richard Stevens discusses the legalities of copyright and the challenge of artificial intelligence in today’s increasingly complicated world.

Lawyer Hammered for Using ChatGPT

Court record system proceeded to block access to sloppy lawyering and AI catastrophe
New York Times reporters watched the hearing in federal district court in New York on June 8, 2023, which they then described: “In a cringe-inducing court hearing, a lawyer who relied on A.I. to craft a motion full of made-up case law said he ‘did not comprehend’ that [ChatGPT] could lead him astray.” (“Lawyer Who Used ChatGPT Faces Penalty for Made Up Citations,” The New York Times, nytimes.com) The reporters got most of it right, but even they erred. The lawyer involved did not write a “motion”; he filed a sworn declaration opposing a motion to dismiss. The difference matters: declarations are under oath, so the lawyer swore to the truth of ChatGPT’s lies. Looking at the actual court Read More ›

Let’s Apply Existing Laws to Regulate AI

No revolutionary laws needed to fight harmful bots
In a recent article, Professor Robert J. Marks reported how artificial intelligence (AI) systems had made false reports or given dangerous advice. Prof. Marks suggested that instead of having government grow even bigger trying to “regulate” AI systems such as ChatGPT: “How about, instead, a simple law that makes companies that release AI responsible for what their AI does? Doing so will open the way for both criminal and civil lawsuits.” Strict liability for AI-caused harms: Prof. Marks has a point. Making AI-producing companies responsible for their software’s actions is feasible using two existing legal ideas. The best known such concept is strict liability. Under general American law, strict liability exists when a defendant is liable for committing an action Read More ›

Panic Propaganda Pushes Surrender to AI-Enhanced Power

The hype over AI's significance makes us more vulnerable to it
Can you believe it? USA Today, the national news outlet, on May 4, 2023, declared (italics added): “It’s the end of the world as we know it: ‘Godfather of AI’ warns nation of trouble ahead.” Before digging out and playing your 1987 R.E.M. album, ask yourself: Is this headline true – and what do we do now? The USA Today article walks the doom timeframe back from imminent to someday in paragraph one (italics added): “One of the world’s foremost architects of artificial intelligence warned Wednesday that unexpectedly rapid advances in AI – including its ability to learn simple reasoning – suggest it could someday take over the world and push humanity toward extinction.” Within a day, the Arizona Republic ran Read More ›

20 Ways AI Enables Criminals

If you cannot believe your eyes and ears, then how can you protect yourself and your family from crime?
As reported recently and relayed in this publication, a mom in Arizona described how criminals called her to say they were holding her daughter for ransom and used artificial intelligence (AI) to mimic perfectly her daughter’s voice down to the word choices and sobs. Only because the mom found her daughter safe in her home could she know the call was a scam. Meanwhile, despite efforts to limit ChatGPT’s excursions into the dark side of human perversity, the wildly famous bot can be persuaded to discuss details of sordid sexuality. In one experiment with Snapchat’s MyAI chatbot, an adult pretending to be a 13-year-old girl asked for advice about having sex for the first time – in a conversation in Read More ›

Can Professor Turley Sue ChatGPT for Libel?

The world wide web of reputation destruction is here
Isn’t there a law against falsely accusing people of serious crimes or misconduct and then publishing damaging lies to the world? Yes. For centuries in English-speaking countries, the victim of such lies could sue the false accuser in civil court for libel per se. Nowadays, libel and its oral statement cousin, slander, are grouped together as defamation. Under American law, it isn’t easy to bring and win a lawsuit even when your case seems strong, but at least the law provides some recourse for defamation. How about when the false accuser is ChatGPT? Jonathan Turley, the nationally known George Washington University law professor and commentator, woke up one morning to discover: ChatGPT falsely reported on a claim of sexual harassment that was never made Read More ›

AI in the Courtroom: How to Program a Hot Mess

Could AI make competent judicial choices in court?
Imagine we’re assigned to design the artificial intelligence (AI) software to carry out legal analysis of cases like a human judge. Our project is “CourtGPT,” a system that receives a factual and legal problem in a case where there are two opposing parties, analyzes how certain statutes and other legal principles apply to the facts, and delivers a decision in favor of one of the parties. CourtGPT will make “legal decisions,” not decide “jury questions of fact,” and thus will function like a judge (not juror). To write a computer program of any complexity, we start by describing the entire program’s operations in English (my native tongue). Pro tip: If you cannot describe how your program operates in human language, then you cannot Read More ›
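
A rough illustration may help here, though it is purely hypothetical and not from the article: the sketch below (in Python) shows what that “describe it in English first” discipline might look like for a CourtGPT-style skeleton, with the plain-language outline carried in comments and the actual legal judgment deliberately left unimplemented. Every name in it (Case, decide, and CourtGPT itself as code) is an assumption made for illustration.

    # Hypothetical sketch only: "CourtGPT," Case, and decide() are illustrative,
    # not an actual system described in the article.
    from dataclasses import dataclass, field

    @dataclass
    class Case:
        facts: str                                    # the factual record the parties present
        party_a: str                                  # one party (e.g., plaintiff)
        party_b: str                                  # the opposing party (e.g., defendant)
        statutes: list = field(default_factory=list)  # legal rules claimed to apply

    def decide(case: Case) -> str:
        """Return the name of the party the system rules for.

        English-first outline of the program's operations:
          1. Identify which of the cited statutes actually govern the facts.
          2. Apply each governing rule to the facts.
          3. Weigh the results and rule for one party.
        Steps 1-3 require legal judgment, which is exactly the part that
        resists being reduced to code.
        """
        raise NotImplementedError("the legal-judgment step is the hard part")

    # Example call (raises NotImplementedError by design):
    # decide(Case(facts="...", party_a="Plaintiff", party_b="Defendant"))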

Love Thy Robot as Thyself

Academics worry about AI feelings, call for AI rights
Riffing on the popular fascination with AI (artificial intelligence) systems ChatGPT and Bing Chat, two authors in the Los Angeles Times recently declared: “We are approaching an era of legitimate dispute about whether the most advanced AI systems have real desires and emotions and deserve substantial care and solicitude.” The authors, Prof. Eric Schwitzgebel at UC Riverside and Henry Shevlin, a senior researcher at the University of Cambridge, observed AI thinkers saying “large neural networks” might be “conscious,” the sophisticated chatbot LaMDA “might have real emotions,” and ordinary human users reportedly “falling in love” with chatbot Replika. Reportedly, “some leading theorists contend that we already have the core technological ingredients for conscious machines.” The authors argue that if or when Read More ›

ChatGPT: Beware the Self-Serving AI Editor

The chatbot "edits" by reworking your article to achieve its own goals, not necessarily yours
My article, Utopia’s Brainiac (short title), reported results from experiments showing, first, that ChatGPT actually lies and, second, that it gives results plainly biased to favor certain political figures over others. I next ran a follow-up experiment: asking ChatGPT to “edit and improve” the Utopia’s Brainiac manuscript before submitting it. Close friends told me they’d used ChatGPT to improve their written work and said the process is easy. So, I tried it myself on February 6, 2023. I entered “Please edit and improve the following essay” and pasted my piece in full text (as ultimately published). In under a minute, ChatGPT delivered its edited and revised copy. What did it do? I. Deleted Whole Section That Gave Readers an Everyday Context Read More ›

Utopia’s Brainiac? ChatGPT Gives Biased Views, Not Neutral Truth

Look at what happens when you try to get ChatGPT to offer unbiased responses about political figures
Do you trust your pocket calculator? Why?  Maybe you’re using the calculator app on your phone. Enter: 2 + 2. You get an answer: 4. But you knew that already. Now enter 111 x 111. Do you get 12,321? Is that the correct answer? Work it out with a pencil. That answer is correct. Try 1234 x 5678.  My calculator app returns 7,006,652. Correct? I’m not going to check it. I’m going to trust the calculator. And so it goes. The harder the problem, the more we trust the computer. That’s one reason why many people trumpet the powers of Artificial Intelligence (AI) systems. Those systems can give answers to problems we individuals couldn’t solve in a lifetime.  But are the AI “answers” correct?  Read More ›
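
For what it’s worth, the three calculator results quoted above are easy to double-check. A minimal Python sketch, simply redoing the arithmetic from the excerpt:

    # Redoing the arithmetic quoted in the excerpt above.
    assert 2 + 2 == 4
    assert 111 * 111 == 12_321       # matches the answer worked out by hand
    assert 1234 * 5678 == 7_006_652  # matches the calculator app's answer
    print("All three results check out.")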

You’ve Got a Robot Lawyer in Your Pocket (Really?)

The DoNotPay AI lawyer program might be useful for fighting parking tickets, but it is unsuited to serious litigation, where much more complex issues are at stake
The Gutfeld! program on Fox News on January 6, 2023, had fun discussing robots replacing lawyers in the practice of law. In faux-serious rhyme, Greg Gutfeld intoned: “Can a computer that’s self aware, keep you from the electric chair?” Sparking the conversation was the report that an artificial intelligence (AI) smartphone app was slated to assist a defendant fighting a parking ticket in a currently undisclosed courtroom. Gigabytes of text could stream forth addressing the near-infinite number of questions raised about robot lawyers. For now, let’s just explore the “robot lawyer” app built by DoNotPay. The company’s website declares: “The DoNotPay app is the home of the world’s first robot lawyer. Fight corporations, beat bureaucracy and sue anyone at the Read More ›

Defining the Role of AI in Patents

Recently, a piece of art called “Théâtre D’opéra Spatial” took home the first-place prize at the Colorado State Fair’s fine art competition in the category of digital arts/digitally manipulated photography. The art was generated using AI. Can AI hold a copyright? Can a human hold a copyright for a piece of artwork that they used AI to generate? Robert J. Read More ›

Can AI Be Issued Patents?

Should a computer program ever be listed as an inventor of a patent? Would AI have any right to sue for patent infringement? The US Patent Office has ruled that only “natural persons” can be named as inventors, not machines, but should that change? Robert J. Marks discusses patent law and artificial intelligence with attorney and author Richard W. Stevens. Read More ›

Patents and the Creativity Requirement

A new invention has to produce unexpected or surprising new results that were not anticipated by existing technology in order to be patented. Can computers generate something outside the explanation or expectation of the programmer? Robert J. Marks discusses patent law, creativity, and artificial intelligence with attorney and author Richard W. Stevens.

Law: Doe vs. GitHub Is a Non-Crisis

Despite worrisome headlines in the media, Doe v. GitHub, Inc. would protect licensed software code without blocking AI systems from using internet data for “learning”
Headline at The Verge: “The lawsuit that could rewrite the rules of AI copyright.” Wired similarly declares: “This Copyright Lawsuit Could Shape the Future of Generative AI.” The subtitle warns: “Algorithms that create art, text, and code are spreading fast — but legal challenges could throw a wrench in the works.” Indeed, two putative class action lawsuits were filed in the Northern District of California federal district court in November 2022 against GitHub, GitHub’s owner Microsoft, OpenAI and others. The lawsuits allege that two interrelated artificial intelligence (AI) software systems are continuously violating the 1998 Digital Millennium Copyright Act (DMCA) as well as breaching contracts, engaging in unlawful competition, and violating California state privacy laws. Attorney and programmer Matthew Butterick Read More ›