AI Should Never Have “Rights”

Efforts to expand rights beyond the human realm are ubiquitous and, in my view, reflect a deep misanthropy and pose a threat to universal human rights.

That includes the movement to declare sophisticated artificial intelligent machines (“strong AI,” not yet here) to be “persons,” entitled to entry into the moral community. Today, there is an extensive discussion of this meme in Religion & Politics. From “As Artificial Intelligence Advances, What are Its Religious Implications?”:

This strong AI, also known as artificial general intelligence (AGI), has not yet been achieved, but would, upon its arrival, require a rethinking of most qualities we associate with uniquely human life: consciousness, purpose, intelligence, the soul—in short, personhood. If a machine were to possess the ability to think like a human, or if a machine were able to make decisions autonomously, should it be considered a person?

But no machine will ever “think like a human.” Our thought processes are not solely computations. They involve the unquantifiable aspects of being alive, e.g., emotions (which no inanimate object could ever actually feel), experience, unconscious input, memories, hormones, genetics, on and on and on.

Indeed, we don’t really understand the nature of consciousness. At most, a strong AI computer would mimic human thought. It would not actually “think”; it would compute. Again, the two are not synonyms.

Supposedly, this would have significant religious implications:

The personhood debate, for Christianity and Judaism in particular, originates with the theological term imago Dei, Latin for “image of God,” which connotes humans’ relationship to their divine creator.

The biblical book of Genesis reads, “God created mankind in his own image.” From this theological point of view, being made in the divine image affords uniqueness to humans.

Were people to create a machine imbued with human-like qualities, or personhood, some thinkers argue, these machines would also be made in the image of God—an understanding of imago Dei that could, in theory, challenge the claim that humans are the only beings on earth with a God-given purpose.

Please. They would be machines, without any “eternal” significance, manufactured and at least initially programmed by us to achieve purposes that we established. They could not die. AI could only be “on” or “off.”

And of course, no machine would have a soul, as the soul is an incorporeal concept, not a material one. Hence, no machine would ever be implicated in religious concepts such as sin, salvation, damnation, reincarnation, etc.

This could happen:

Araya speculates that machines with strong artificial intelligence could become objects of worship in and of themselves: “There’d be religious movements that worship AI.” If a machine possesses cures for long-existing fatal diseases, knew how to improve education, and brought order to society, would humans idolize it?

So what? Some people will “worship” almost anything. It is, after all, the unique aspect of being human that compels us to quest for meaning and purpose.

But isn’t it interesting that anti-human exceptionalists (whether transhumanists who want to merge with computers, animal rights activists, nature rights activists, utilitarian bioethicists, and so on) use our capacities as their lodestar for comparison?

So, invent AI machines as useful sophisticated tools. But let’s stop the nonsense of declaring anything but humans to be “persons.” We are the only “who” in the known universe. Everything else is a “what.”

“Strong AI” machines would have no more moral importance in and of themselves than a toaster.

Wesley J. Smith

Chair and Senior Fellow, Center on Human Exceptionalism
Wesley J. Smith is Chair and Senior Fellow at the Discovery Institute’s Center on Human Exceptionalism. Wesley is a contributor to National Review and is the author of 14 books, in recent years focusing on human dignity, liberty, and equality. Wesley has been recognized as one of America’s premier public intellectuals on bioethics by National Journal and has been honored by the Human Life Foundation as a “Great Defender of Life” for his work against suicide and euthanasia. Wesley’s most recent book is Culture of Death: The Age of “Do Harm” Medicine, a warning about the dangers to patients of the modern bioethics movement.