I, for One, Welcome Our New Robot Overlords

Should we fear the rise of ‘intelligent’ computers?

In case you haven’t heard, the newest champion of “Jeopardy!,” the popular TV game show, is a computer. Watson, an enormous computer developed by researchers at IBM, was pitted against the two previous human champions, Brad Rutter and Ken Jennings. At the end of the first round, aired on Valentine’s Day, Rutter and Watson were tied for first place. But Watson trounced both humans in the next round, despite making some odd mistakes. And he won the second game, aired on February 16, suggesting the first victory was more than just beginner’s luck.

When the IBM computer Deep Blue beat chess champion Garry Kasparov in 1997, it was not doing anything qualitatively different from an ordinary calculator. It was just calculating really quickly—searching through vast numbers of possible responses to Kasparov’s previous move and picking the one most likely to succeed. That’s just the sort of problem that a fast-enough computer running the right algorithm was bound to solve.
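
To see what “just calculating” means in practice, here is a minimal sketch of that kind of brute-force game-tree search (minimax), applied not to chess but to a toy take-the-last-stone game; the game, the function names, and the numbers are illustrative assumptions, not anything from IBM’s actual Deep Blue:

# Minimax sketch: try every legal move, score where it leads, and pick the best.
# Toy game (hypothetical): players alternately take 1-3 stones from a pile;
# whoever takes the last stone wins.

def minimax(stones, maximizing):
    """Return (score, best_move): +1 if the maximizing player wins, -1 if it loses."""
    if stones == 0:
        # No stones left: the player who just moved took the last one and won.
        return (-1, None) if maximizing else (1, None)
    best_score, best_move = (-2, None) if maximizing else (2, None)
    for move in range(1, min(3, stones) + 1):  # enumerate every legal move
        score, _ = minimax(stones - move, not maximizing)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

score, move = minimax(stones=10, maximizing=True)
print(f"Take {move} stone(s); predicted result for the first player: {score}")

Deep Blue’s evaluation and pruning were vastly more sophisticated, but the basic shape is the same: enumerate moves, score the outcomes, choose the best.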

In the years since then, computers have gotten much better at accomplishing well-defined tasks. We experience it every time we use Google: “weak” artificial intelligence, which mimics the behavior of an intelligent agent at a specific, narrow task. But the Holy Grail of artificial intelligence (AI) has always been human language. Because contexts and reference frames change constantly in ordinary life, speaking human language, like playing “Jeopardy!,” is not easily reducible to an algorithm.

In “Jeopardy!,” a “question”∗ may be historical, scientific, literary, or artistic. It may employ a pun, or require a contestant to think of a word that rhymes with another word that is not mentioned in the question. To succeed, you need something like mastery of language. Even the best computers haven’t come close to mastering the linguistic flexibility of human beings in ordinary life—until now. Although Watson is still quite limited by human standards—it makes weird mistakes and can’t make you a latte or carry on an engaging conversation—it seems far more intelligent than anything we’ve yet encountered from the world of computers.

In a test round of “Jeopardy!,” for instance, the host gave this answer: “Barack’s Andean pack animals.” Watson came up with the right question almost instantly: “What is Obama’s llamas?” We’re getting a glimmer of the day when a computer could pass the “Turing Test,” that is, when an interrogating judge won’t be able to distinguish between a computer and a human being hidden behind a curtain.

Artificial intelligence gives lots of people the creeps. When I tell friends and family about Watson, most of them think of Terminator or The Matrix. They see Watson’s victory as a portent of some future cataclysm, when machines will take over the world and reduce human beings to slavery. Maybe everyone I interact with has become a Luddite, but that seems unlikely. I live in Seattle, after all.

As it happens, this fear of technology by the tech-savvy is quite common. In 1998, inventor and futurist Ray Kurzweil described the coming age of “spiritual machines” at a Telecosm Conference sponsored by George Gilder and Forbes Magazine. Kurzweil’s vision of man-machine hybrids, conscious computers, and human beings casting off our fleshy hardware for something more permanent elicited a variety of responses, including one by Bill Joy of Sun Microsystems. Joy penned a famous piece for Wired magazine in which he called for government to limit research on the so-called “GNR” technologies (genetics, nanotechnology, and robotics). These were the most ethically troubling technologies because, in Joy’s opinion, they were most likely to open Pandora’s box. Joy, who had enjoyed decades of unfettered research and entrepreneurial creativity, had now fingered the true enemy of humanity: the free market.

Talk about an overreaction. Still, part of the blame must rest with AI enthusiasts, who aren’t always careful to keep separate issues, well, separate. Too often, they indulge in utopian dreams, make unjustifiable logical leaps, and smuggle in questionable philosophical assumptions. As a result, they not only invite dystopian reactions, they also keep ordinary people from welcoming, rather than fearing, our technological future.

Accelerating Returns

That’s unfortunate, because at the core of AI technology is a fascinating phenomenon. The always-interesting Kurzweil has alerted us to the fact that computer technology does not just improve linearly over time, but exponentially. His observation extended the famed “Moore’s Law.” Named for Intel’s Gordon Moore, it holds (more or less) that computing power doubles every 18 to 24 months. In particular, Moore had noticed the increasing number of transistors that could be placed on an integrated circuit over the previous few decades. The skeptic might think that Moore was simply describing some physical property of transistors and integrated circuits, and not a more general trend. But Kurzweil discovered that this trend of accelerating returns had held for a hundred years across entirely different computer technologies. To him and many others, this suggested something about the nature of technological innovation itself.
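
As a rough, back-of-the-envelope illustration of what doubling every couple of years adds up to (the starting transistor count for Intel’s 4004 is approximate, and the two-year doubling period is an assumption at the slow end of Moore’s range):

# Exponential growth sketch: capacity after t years is roughly base * 2**(t / T),
# where T is the doubling time in years. Figures here are illustrative only.
base = 2300          # approximate transistor count of the Intel 4004 (1971)
doubling_time = 2.0  # years per doubling (the slow end of 18-24 months)

for years in (0, 10, 20, 30, 40):
    count = base * 2 ** (years / doubling_time)
    print(f"after {years:2d} years: ~{int(count):,} transistors")

Forty years of that compounding turns a few thousand transistors into billions, which is why a straight-line extrapolation so badly underestimates the trend.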

On the assumption that the trend will continue indefinitely, Kurzweil has predicted a “singularity,” a future moment when technological change is “so rapid and so profound that it represents a rupture in the fabric of human history.” That sounds like science fiction, but lots of otherwise serious people take it seriously. In a largely sympathetic story on Kurzweil and the singularity in Time magazine, Lev Grossman tells us that the singularity is near. In fact, 2045 is the year “man becomes immortal.” This is within the lifetime of many people reading this piece, and easily within the lifetime of most of our children.

Kurzweil recently co-founded Singularity University, where academics, technology experts, scientists, CEOs, and others gather to discuss the implications of AI. As the discussion goes mainstream, we can expect more widespread worry that Prometheus is about to start a wildfire. But rather than be pulled to and fro by utopian and dystopian visions of our future, we would all do well to keep a few things in mind.

Weak AI Is Not Strong AI

Popular discussions of AI often suggest that if you keep improving weak AI, at some point you’ll get strong AI. That is, if you get enough computation, you’ll eventually get consciousness.

The reasoning goes something like this: There will be a moment at which a computer will be indistinguishable from a human intelligent agent in a blind test. At that point, we will have intelligent, conscious machines.

This does not follow. A computer may pass the Turing Test, but that doesn’t mean that it will actually be a self-conscious, free agent.

The point seems obvious, but we can easily be beguiled by the way we speak of computers: We talk about computers learning, making mistakes, becoming more intelligent, and so forth. We need to remember that we are speaking metaphorically.

We can also be led astray by unexamined metaphysical assumptions. If we’re just computers made of meat, and we happened to become conscious at some point, what’s to stop computers from doing the same? That makes sense if you accept the premise—as many AI researchers do. If you don’t accept the premise, though, you don’t have to accept the conclusion.

In fact, there’s no good reason to assume that consciousness and agency emerge by accident at some threshold of speed and computational power in computers. We know by introspection that we are conscious, free beings—though we really don’t know how this works. So we naturally attribute consciousness to other humans. We also know generally what’s going on inside a computer, since we build them, and it has nothing to do with consciousness. It’s quite likely that consciousness is qualitatively different from the type of computation that we have developed in computers (as the “Chinese Room” argument, by philosopher John Searle, seems to show). Remember that, and you’ll suffer less anxiety as computers become more powerful.

Even if computer technology provides accelerating returns for the foreseeable future, it doesn’t follow that we’ll be replacing ourselves anytime soon. AI enthusiasts often make highly simplistic assumptions about human nature and biology. Rather than marveling at the ways in which computation illuminates our understanding of the microscopic biological world, many treat biological systems as nothing but clunky, soon-to-be-obsolete conglomerations of hardware and software. Fanciful speculations about uploading ourselves onto the Internet and transcending our biology rest on these simplistic assumptions. This is a common philosophical blind spot in the AI community, but it’s not a danger of AI research itself, which primarily involves programming and computers.

AI researchers often mix topics from different disciplines—biology, physics, computer science, robotics—and this causes critics to do the same. For instance, many critics worry that AI research leads inevitably to tampering with human nature. But different types of research raise different concerns. There are serious ethical questions when we’re dealing with human cloning and research that destroys human embryos. But AI research in itself does not raise these concerns. It normally involves computers, machines, and programming. While all technology raises ethical issues, we should be less worried about AI research—which has many benign applications—than research that treats human life as a means rather than an end.

Artificial Intelligence Doesn’t Diminish Our Dignity

Champions and critics of AI often assume that the advent of increasingly “intelligent” machines challenges our dignity. But why is that? Did the invention of the wheel, the tallow candle, the abacus, the car, the plane, or the calculator weaken our status in the grand scheme of things? Hardly. Each of these technologies is an example of human ingenuity and creativity. Was the inventor of the wheelbarrow made weaker because he created something that could carry more than he could carry by himself? Of course not. He used his God-given ingenuity to enhance his own productivity and the productivity of everyone else who used a wheelbarrow.

We should respond the same way to AI technology. Instead of being concerned that a computer can beat champions at “Jeopardy!,” we should admire the achievements of David Ferrucci (principal researcher for Watson at IBM) and the other human engineers who spent years designing Watson. Surely theirs is a greater, and ultimately more beneficial, achievement than answering a bunch of questions on a game show.

We have no guarantees, but let’s hope we continue to enjoy accelerating returns in computer technology. We should think seriously about what this would mean for the future. But we should not get distracted by the fanciful, and distinct, ideas that have come to be associated with artificial intelligence.

Jay W. Richards, PhD, is a senior fellow and director of research at Discovery Institute, a contributing editor to THE AMERICAN, and author of Money, Greed, and God: Why Capitalism is the Solution and Not the Problem.

FURTHER READING: Richards has pursued other relationships between physical and metaphysical realities, including “The Immateriality of Wealth,” “Did Physics Kill God?” and “When to Doubt a Scientific ‘Consensus.’” Karlyn Bowman explains “The Public View of Regulation, Revisited,” Roger Scruton discusses the Internet’s effect on people in “Hiding Behind the Screen,” and Nick Schulz discusses why “Information Technology Remains the One U.S. Economic Ace.”

∗ In Jeopardy!, the contestant selects a category, then the host provides an “answer” to a question from that category, for which the contestant tries to provide the question. So the host might say: “The color of the sky” and the contestant would reply: “What is blue?”
