Discovery News

Fed Up in Seattle

Don’t believe the hype that “Amazon killed the Seattle head tax,” the new levy that the city recently passed on businesses to fund an affordable-housing initiative. The truth behind the city council’s stunning reversal—repealing the tax by a 7-2 vote, just four weeks after passing it 9-0—is that Seattle citizens have erupted in frustration against the city’s tax-and-spend political class that has failed to address the homelessness crisis, despite record new revenues. As recently as a few years ago, it seemed as if Seattle voters largely viewed our hyper-progressive city council as a harmless oddity in an otherwise tolerant, thriving, liberal city. But times have changed. Now, according to recent public polling, 83 percent of Seattle voters are dissatisfied with how the council has addressed homelessness, 65 percent believe that the local government hasn’t used new tax revenues effectively, and 63 percent believe that the city has enough money to solve the problem but isn’t pursuing the right policies. Go to Story (offsite) ›


Make the Seattle City Council Great Again

There seem to be cycles in city politics. Fifty years ago, a small band of Young Republicans and Young Democrats came together in an unusual alliance to overturn the existing Seattle City Council. They called themselves CHECC: Choose an Effective City Council. It took a couple of elections, but they prevailed, and it was then — in the 1970s — that formerly sleepy, somewhat stodgy Seattle began to get national attention as the “most livable city.” Sixty years before that, in the early 20th century, another group of novice politicians introduced the “Progressive Era” that gave us Seattle’s city water and light dams (providing abundant, cheap water and electricity), the public park system we enjoy today, and the ship canal connecting Puget Sound and Lake Washington. “Progressive” back then meant progress. It did not mean endless tax increases untethered to clear programs, government by protest demonstration, and an ineffectually managed homelessness crisis. Go to Story (offsite) ›

The Wages of Death

Twenty-five years ago, Newsweek published my first essay. In the wake of my friend’s suicide under the influence of the Hemlock Society, I worried that some suicides would be “promoted as a virtue” if assisted suicide, or euthanasia, was ever accepted. (Assisted suicide involves a doctor’s knowingly prescribing drugs for use in the patient’s suicide; euthanasia involves a doctor’s lethally injecting the patient.) After that, I predicted, eligibility for hastened death would expand to those “who don’t have a good ‘quality’ of life,” “perhaps with the prospect of organ harvesting thrown in as a plum to society.” I thought the essay would be uncontroversial. Then came the hate mail, at a decibel level that I had not experienced theretofore in my years of public-policy advocacy. Euthanasia is a human right, I was told angrily. Correspondents hoped that I would die slowly of a painful cancer. I was called a religious fanatic (even though I had not mentioned religion), an alarmist, a fantasist, and a sadist. Nothing like what I feared would ever happen if society legalized assisted suicide or euthanasia, I was reassured. In the years since, however, doctor-administered lethal-injection euthanasia has been legalized in the Netherlands, Belgium, Canada, Luxembourg, and Colombia. Legal doctor-assisted suicide has followed in six U.S. states, plus the District of Columbia — the latest, Hawaii, enacted its statute in early April — as well as in the Australian state of Victoria and in Germany, where it was imposed by court order. In Switzerland, the previously little-known 1942 law permitting assisted suicide has become the basis for a flourishing “suicide tourism” industry. Tens of thousands of people have now been legally killed or assisted in suicide by doctors in these jurisdictions. Not only have many of the worst fears that I expressed in 1993 been realized, but, in some ways, things have become more radical than I ever dreamed of.
The Netherlands led the charge down the slippery slope. Assisted suicide was decriminalized in the 1970s as long as doctors followed supposedly strict guidelines, and the categories of those eligible to be killed expanded steadily thereafter. That process has accelerated especially since formal legalization in 2002. Currently, more than 6,000 people die in the Netherlands by euthanasia and assisted suicide each year. Those now deemed killable range from the terminally ill and the chronically ill, such as a woman with serious tinnitus, to people with disabilities, including the paralyzed and chronic alcoholics; dementia patients who asked to be euthanized in advance directives; the elderly with non-life-threatening health concerns or early dementia; and even the mentally ill, 83 of whom were euthanized in 2017. According to the medical journal JAMA Psychiatry, in recent years “depressive disorders were the primary issue” in 55 percent of Dutch mental-illness euthanasia cases. And babies born with serious disabilities, such as spina bifida, or with terminal conditions are lethally injected under a neonatal euthanasia protocol. There have been many clear cases of abuse: the elderly woman euthanized for macular degeneration, the anorexic young woman put down because of the suffering she experienced from her eating disorder, the nursing-home doctor who euthanized a patient he thought had lung cancer before the diagnosis was confirmed. A particular 2016 case stands out in its horror. Prior to becoming unable to care for herself, a woman with dementia wrote a note stating she never wanted to live in a nursing home. Despite that, she was institutionalized, where she became afraid and wandered the halls — typical symptoms of Alzheimer’s disease. Her doctor — without asking — decided the time had come for her life to end. The doctor drugged the woman’s coffee so that she would sleep while being killed, a violation of euthanasia rules. Then things really went awry.
According to the Daily Mail, while the doctor was attempting to lethally inject her, the woman woke up and fought to save her life:
The paperwork showed that the only way the doctor could complete the injection was by getting family members to help restrain her. It also revealed that the patient said several times “I don’t want to die” in the days before she was put to death, and that the doctor had not spoken to her about what was planned because she did not want to cause unnecessary extra distress.
Can you imagine a woman struggling against being killed being held down by her own family? By any reasonable measure, that was murder. But a Regional Review Committee inquiry exonerated the doctor because she had “acted in good faith.” The Belgians have taken their euthanasia regime to even more radical extremes. Mentally ill patients have been voluntarily euthanized, and their organs immediately harvested; the Dutch have since started doing that, too. The Belgians also pioneered joint euthanasia deaths of elderly couples who would rather die than face widowhood. The death doctor in one of these cases was procured by the couple’s son, who told a reporter that this was the best thing to do because he could not care for them. Joint geriatric euthanasia has also ended the lives of elderly couples in the Netherlands, at a Swiss suicide clinic, and, most recently, in Canada. Belgian euthanasia has grown so wild that a doctor who had supported legalization and served as an oversight official resigned from his responsibilities because of the number of abuses that had passed through his committee with nothing done to hold the wrongdoers to account. He wrote about one case in particular:
The most striking example took place at the meeting of Tuesday, September 5, 2017: a euthanasia of a deeply demented patient with Parkinson’s disease, by a general practitioner who is totally incompetent, has no idea of palliation, done at the request of the family. The intention was to kill the patient. There was no request from the patient.
Canada is racing down the same road. After the Canadian supreme court conjured a right to receive euthanasia for any patient with a diagnosed condition causing irremediable suffering (including psychological suffering as defined by the patient, meaning that there is no objective test), the country embraced what is known as “MAID” — medical assistance in dying — with great gusto. In response, the Canadian parliament legalized euthanasia across the country but limited it to circumstances where death is “foreseeable” — whatever that means. Even that condition has come under legal attack as too restrictive. In any case, the College of Physicians and Surgeons of British Columbia issued an ethics opinion that a patient who doesn’t qualify for euthanasia can make himself eligible simply by starving himself or refusing treatment to the point where he can be judged to be “declining toward death.” An Oregon bureaucrat has made a similar determination under that state’s assisted-suicide statute. Under the law, to receive doctor-assisted suicide, the patient — for now — must be reasonably expected to die within six months. (Demonstrating the uncertainty of such a diagnosis, some patients who received lethal prescriptions but didn’t take them lived for years afterwards.) When asked by a Swedish researcher whether diabetic patients who stopped taking insulin or patients who could not afford curative treatment could thereby qualify for a lethal prescription under the law, Craig New, a research analyst at the Oregon Health Authority, answered in the affirmative:
In your two examples, both patients would qualify for the DWDA [Death with Dignity Act]. Patients suffering from any disease (not just those that typically qualify one for the DWDA) may not be able to afford some treatments or medication, and may choose not to pursue some treatments or take some medication for personal reasons. . . . If the patient does not receive treatment or medication (for whatever reason) and is left with a terminal illness, then s/he would qualify for the DWDA. [Emphasis added].
“I think you could also argue,” New continued, “that even if the treatment/medication could actually cure the disease, and the patient cannot pay for the treatment, then the disease remains incurable.” In other words, the six-months-to-live restriction has already been stretched to include people who would live longer, perhaps indefinitely, if they received medical treatment. And what about the doctors? What qualifications must they have to participate in assisted suicide? They do not need to have any significant experience in treating the patient’s malady leading to the death request. Thus, Lonnie Shavelson, a California part-time emergency-room physician and assisted-suicide activist — who had in recent years mostly practiced advocacy journalism rather than medicine — opened a death practice in which he charges $2,000 to counsel patients and write a lethal prescription. And what experience would an ER doctor have in treating terminal illnesses such as cancer, ALS, or renal disease? Beyond medical school and residency, not much. Similarly, a Belgian oncologist lethally injected Godelieva De Troyer — who did not have cancer but suffered from long-term depression, a malady clearly outside the doctor’s specialized training. The first that De Troyer’s son, Tom Mortier, heard about the planned death was when the hospital called to have him pick up his mother’s corpse. Here’s the moral of the story. The “strict guidelines” that activists promise will protect against abuse don’t, and, indeed, the restrictions erode with time. Legalizing assisted suicide and euthanasia shifts mindsets, and, as a consequence, people don’t much care about the steady increases in assisted suicide that follow legalization, or about clearly abusive cases, which they would once have found abhorrent, that come to light. Finally, euthanasia and assisted suicide corrupt everything they touch: the doctor–patient relationship, familial bonds, and our embrace of the intrinsic value of human life. 
This includes society’s commitment to suicide-prevention services, which these days are usually not offered to those who are suicidal as the result of a terminal illness. The debate over assisted suicide should encompass what the regime of death will become and where it will lead. There is more to this argument than simply whether assisted suicide should be legalized for certain categories of individuals. It is a pretense that the practice will always be limited to the dying for whom nothing else can be done to alleviate suffering. Those with eyes to see, let them see. Go to Story (offsite) ›

The Deeper Meaning of Memorial Day

Memorial Day had its origin as Decoration Day following the Civil War, but it did not become an official federal holiday honoring those who lost their lives while serving in the U.S. armed forces until 1971. Memorial Day is also an occasion to remember those who died in the just causes for which the United States was willing to go to war. World War I, World War II, Korea and Vietnam were conflicts where freedom was clearly at stake. The post-September 11 engagements in Afghanistan, Iraq and Syria remain a bit more complicated, being associated with responses to abuse of power in divided countries and to transnational radical Islamist terrorism. October 2018 marks 25 years since U.S. forces suffered defeat in Somalia. But we still commemorate the courageous Army Rangers and Delta Force members who fought and died in the chaotic streets of Mogadishu so that their fellow soldiers could survive against overwhelming odds, depicted in the popular film “Black Hawk Down.” Some U.S. military engagements were ill-advised and injustices were committed along the way. History shows that the injustices were probably greater in actions taken by Washington politicians and bureaucrats than by the military in the field. For instance, the government’s willingness to authorize and deploy military force — putting American lives in harm’s way without clear and realistic objectives and plans for victory — was the great injustice of the Vietnam War. In other cases, such as in Iraq, President Obama’s political decision to withdraw U.S. military forces by the end of 2011 directly led to the injustice of reversal of hard-fought gains made by the military in the prior eight years, and resulted in a power vacuum filled by the rise of ISIS and growing Iranian influence. Go to Story (offsite) ›

From Darwinism to Dataism: Will We Lose Our Representative Democracy to Techno-Religion?

Science fiction writers have long understood that when tyranny comes it often is introduced as some improvement, or as the correction of some perceived problem. C. S. Lewis, for example, warned of the therapeutic state that wants what is best for us, whether we ask for it or not. It starts as science, becomes scientism, then demands obedience. Jeremy Rifkin is a philosopher of Big Data in our own time who has a Marxist view of human good, organized in the “Commons,” whose space, according to his book “The Zero Marginal Cost Society: The Internet of Things, the Collaborative Commons, and the Eclipse of Capitalism,” is “more basic than both business and the market.” He writes that “The very purpose of the new technological platform is to encourage a sharing culture, which is what the Commons is all about.” There is not much privacy or individualism left in this Commons, however. In the future, there may not be much left of old-fashioned elites (or much need for politicians), but there definitely will be wise, all-knowing techno-elites. The old world will give way to a “stream of Big Data on the comings and goings of society that can be accessed and shared collaboratively … processed with advanced analytics, transformed into productive algorithms, and programmed into automated systems.” A similar vision is offered by an Israeli advocate of artificial intelligence (AI), Yuval Noah Harari, who supposes that science is showing the uselessness of inherited biology and traditional human roles. His book, “Homo Deus,” is steeped in Darwinism and what he understands to be Alan Turing’s information theory. Like Marx announcing in the 19th century that the next phase of the industrial revolution would make capitalism obsolete, Harari sees AI creating a new human being independent of our present economy. “Science is converging,” he proclaims, “on an all-encompassing dogma, which says that organisms are algorithms and life is data processing ...
Intelligence is decoupling from consciousness … Non-conscious but highly intelligent algorithms may soon know us better than we know ourselves.” He candidly acknowledges that the system is a “techno-religion” he calls “Dataism.” It replaces religion and “venerates neither gods nor man—it worships data.” Says Harari, “Political scientists increasingly interpret political structures as data processing systems.” Yes, they do that, which is part of our problem. If an algorithm already knows what we think, want and need, why bother with politicians or representative government? Harari seems to agree. “This implies that as data-processing conditions change again in the 21st century, democracy might decline and even disappear.” And you’ll love it. “[P]eople want to be part of [this] data flow, even if that means giving up their privacy, their autonomy and their individuality.” Maybe. Some people already seem eager to surrender to their smartphones. But Marxist ways of seeing technological change often fall apart with experience. So, let the utopias of Rifkin and Harari—or the dystopias they would become—gestate as they will. Most of us will be turned off by the prospect. I don’t think that people, upon reflection, will give their political power to an algorithm, even one disguised as progress. The Dataism future should be seen as cautionary, not predictive. The danger of losing representative democracy to techno-religion might be just the jolt needed to restore our defense of it. It should warn us to cast a sharp eye on short-term but unsavory trends now underway that are leading our society in the wrong direction. In a political order, affinity groups help the politician think through public issues. But anti-democratic groups whose interests were ignored in the old days, when news was dominated by newspapers and broadcasters, can now find in the Internet useful ways to organize in relative secrecy.
Real and fraudulent Internet outfits that traffic in sleazy speculation have found ways to get stories into the mainstream, partly by seducing hit-lusting advertisers. In the past, scrupulous editors screened out the smears, incendiary provocations and smut. Instead, political news now freely spirals downward. An imminent threat to civil politics could be the depersonalization of public meetings held over the Internet. Face-to-face encounters, where people use their own names, are generally polite. Knowing who is in a meeting—and knowing in return that one is known—conduces to mutual toleration and respect. But in a virtual community, a “believability meter” could easily be deployed, with “instant feedback” from Internet attendees, destroying a candidate before he had even finished his remarks, and encouraging other candidates to appeal to superficial and heated sentiments. In a blunt assessment from another context, Publius (Madison) warns us, “Had every Athenian citizen been a Socrates, every Athenian assembly would still have been a mob.” Today it can be a cyber-mob. And history teaches that violence and anarchy—the mob—lead to crises of authority and, then, tyranny. Most of our cherished democratic institutions, after all, assume personal encounters at some point, and our system seems to work best when those circumstances are maintained, such as the localities where parties still function and people turn out for a “candidate’s night.” Our traditional sense of community is rooted in geographical identities and the same kind of local loyalties that, for example, support a specific high school basketball team rather than some national association of basketball fans. Our representative democracy presupposes the shared and particular interests of specific places, not an attachment to uncommon and dispersed special interests or to abstract opinion—let alone to an algorithm. The dangers of assigning superior moral worth to “Big Data” and AI should be obvious by now.
Their connections to the ideologies of Darwinism and Marxism also should be manifest, even if much of the academic community conspires to hide them. Bruce Chapman is founder and Chairman of Discovery Institute, and author of the new book “Politicians: The Worst Kind of People to Run the Government, Except for All the Others.” Go to Story (offsite) ›


New Book Says Politicians Are “The Worst Kind of People to Run the Government, Except for All the Others”

Political “middlemen” who infringe on the relationship between the people and their elected representatives constitute a growing danger to democracy, according to a new book, Politicians, by Bruce K. Chapman. “Politicians themselves are partly to blame for ceding responsibilities to unelected powers,” says Chapman, himself a former elected and appointed official. “Those powers include bureaucrats and judges, but also media, academics, non-profit cause groups, ‘professional reformers’ and campaign businesses that ‘live off of’ politics, rather than ‘for’ it.” A good example of shifted responsibility, says Chapman, is Congress’ relinquishment of authority to government regulatory agencies. Another, Chapman says, is the “scandal business” that increasingly monopolizes public attention and is incentivized by unrealistic federal legislation. The advent of social media, which might Read More ›