
I Want a New Drug

Why Price Controls Will Stop Pharmaceutical Progress

Originally published at The Weekly Standard

FACED WITH RISING Medicaid costs, the states have begun to trumpet the oldest illusion about government power — that price controls can make things abundant and “affordable,” in this case prescription drugs.

On May 19 the U.S. Supreme Court gave the green light to a Maine program that includes thousands of uninsured citizens in a discount drug-buying program the state has forged for Medicaid patients. The decision rang the gong for other states to start similar efforts. Within a week, Ohio Republicans announced they would no longer oppose a similar discount program for the uninsured. New York, Minnesota, Texas, Michigan, and twenty-one other states have similar schemes afoot. There is even talk about forming regional cooperatives to impose what amounts to price controls on the entire drug industry.

These programs go far beyond normal bargaining. In Maine, for example, if drug companies refuse to negotiate with state officials over discounts for the uninsured, their products will be subjected to extra scrutiny for Medicaid reimbursement. The new law also gives state officials authority to set prices unilaterally — as they already do with Medicaid purchases. A private company that behaved like this would be prosecuted under the antitrust laws.

As with any price controls, present consumption will be favored over future development. This will be a slow-motion disaster for American medicine. Pharmaceutical companies now invest 18 percent of their revenues in research and development, the highest of any economic sector. Nine of the top twenty research spenders are pharmaceutical companies. Americans as a result have enjoyed, and come to take for granted, a spectacular outpouring of new medicines for AIDS, cancer, Alzheimer’s, congestive heart failure, cystic fibrosis, depression, and a host of other diseases. With less revenue to invest, that pipeline will eventually slow to a trickle.

Drug-price-control initiatives are based on the faulty perception that prescription drugs are the cause of medical inflation. “During the past two years, spending on health care has increased by more than $200 billion, a jump of nearly 17 percent, primarily because of the rising cost of prescription drugs,” writes national financial columnist Lou Dobbs. This is nonsense. Prescription drugs were only 9 percent of health care costs (hospitals absorb 32 percent, doctors 22 percent) in 2000. Drug spending would have had to nearly triple over those two years to account for this $200 billion increase on its own.
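The arithmetic behind that rebuttal can be checked in a few lines. This is a back-of-envelope sketch using only the figures cited above, which are the article's rough 2000-era estimates, not exact data:

```python
# Back-of-envelope check of the drug-spending arithmetic
# (all inputs are the article's cited estimates).

increase = 200e9      # cited two-year rise in total health spending ($)
pct_rise = 0.17       # cited "nearly 17 percent" jump
drug_share = 0.09     # prescription drugs' share of health costs in 2000

baseline_total = increase / pct_rise           # implied total spending before the rise
baseline_drugs = drug_share * baseline_total   # implied baseline drug spending

# If drugs alone explained the whole $200 billion increase, drug spending
# would have had to grow from baseline_drugs to baseline_drugs + increase:
required_growth = (baseline_drugs + increase) / baseline_drugs
annual_factor = required_growth ** 0.5         # compound factor per year over two years

print(f"implied drug spending: ${baseline_drugs / 1e9:.0f}B")
print(f"required two-year growth: {required_growth:.1f}x "
      f"(~{(annual_factor - 1) * 100:.0f}% per year)")
```

On these numbers, baseline drug spending works out to roughly $106 billion, so drugs alone could account for the increase only by nearly tripling in two years, a compound rise of about 70 percent a year.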

In fact, new drugs commonly substitute for more expensive treatments such as surgery and hospitalization. Treating stroke patients with new clot-busting drugs has saved $4,400 per patient by cutting hospitalization and rehabilitation costs, according to a study sponsored by the National Institutes of Health. Humana Hospitals found that while new drugs for congestive heart failure increased pharmacy costs 60 percent, they cut hospital costs 78 percent, saving $9,000 per patient. Not incidentally, the same drugs also cut mortality rates from 25 percent to 10 percent.

Prescription drugs are expensive — there is no doubt about that. Some ordinary antibiotics or antihistamines now run close to $100 per prescription. Almost everyone with private insurance has drug coverage, although some copayment is usually required. Medicaid provides access to the poor. As usual, the uninsured — generally people employed in small businesses — are a problem.

Certain to bring the crisis to a boil are proposals from Congress and the Bush administration to cover prescription drugs through Medicare. The drug industry — perhaps a bit foolishly — is supporting the initiative, figuring it will pay the bills. A more likely scenario is that Medicare itself — following the example of the states — will become a new instrument for imposing price controls. A better approach would be to look at what makes prescription drugs so expensive in the first place, and whether these development costs can be reduced.

An obvious target would be the Food and Drug Administration (FDA) approval process, which is rapidly becoming obsolete. Founded during the Progressive Era, the FDA kept close watch over the toxic dangers of new drugs until the thalidomide birth defects of the early 1960s — an episode that mainly affected Europe and was successfully prevented by the FDA in this country. Nonetheless, Congress took the occasion to expand the FDA’s responsibilities to include testing for efficacy as well as toxicity.

Efficacy testing adds years and hundreds of millions of dollars to the approval process. Desperate patients wait indefinitely while FDA regulators chew their pencils and scratch their heads, looking for more convincing evidence. Meanwhile, with the spread of information on the Internet, clinical trials for efficacy are becoming more and more difficult to complete. Say you’re dying of cancer. Would you be willing to participate in an FDA trial where there is a 50 percent chance you will be receiving a placebo? “An increasing number of trials are now falling apart as soon as there are perceived results,” says Tom Miller, health policy analyst at the Cato Institute. “It’s also getting harder and harder to recruit volunteers.”

Rather than allowing an orderly progression of new products at reasonable prices, efficacy testing has turned the industry into a casino. For every 5,000 new compounds the industry screens, 250 are chosen for preclinical testing, according to Pharmaceutical Research and Manufacturers of America (PhRMA). Five of these will enter long-term clinical trials. Only one will be approved, says Joseph DiMasi of the Tufts Center for the Study of Drug Development. Thus, each marketed drug must earn back on average $1 billion in FDA testing costs. But only three of ten marketed drugs earn back even their own investment.
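The casino odds in the PhRMA and Tufts figures can be made explicit. This is an illustrative sketch; the stage counts are the numbers cited above:

```python
# Attrition funnel implied by the PhRMA/Tufts figures cited above.
stages = [
    ("screened", 5000),      # new compounds screened
    ("preclinical", 250),    # chosen for preclinical testing
    ("clinical trials", 5),  # enter long-term clinical trials
    ("approved", 1),         # finally win approval
]

# Survival rate at each step of the funnel.
for (name_a, n_a), (name_b, n_b) in zip(stages, stages[1:]):
    print(f"{name_a} -> {name_b}: {n_b / n_a:.1%} survive")

# Overall odds of a screened compound reaching the market.
overall_odds = stages[0][1] // stages[-1][1]
print(f"overall: 1 approval per {overall_odds:,} compounds screened")
```

The survival rate is 5 percent from screening to preclinical testing, 2 percent from preclinical to clinical trials, and 20 percent from trials to approval, for overall odds of one marketed drug per 5,000 compounds screened.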

The syndrome has left even the most successful drug giants highly dependent on one or two blockbusters. Merck has Vioxx, a $3-billion-a-year painkiller; Pfizer has Viagra; Eli Lilly has Prozac. But few of their other products make money. As patents on these meal tickets approach expiration, investors get jittery. “Clinical trials have become so expensive, it’s very difficult to be a mid-sized player,” says Ben Bonifant, a life sciences specialist at Mercer Management Consulting, Inc.

Profits have become even more tenuous because of generic imitators, which have prospered since the 1984 Hatch-Waxman Act. Before 1984, generic drugs also had to undergo testing to meet FDA requirements, which effectively extended the original drug’s patent far beyond the statutory 17 years. Hatch-Waxman struck an artful compromise, allowing generic imitators to use the patent holder’s results for market approval. In exchange, the patent life on original drugs was extended up to five years. Since 1984, generics’ share of the market has risen from 19 percent to 50 percent — which has done wonders for drug prices. But it also makes the major research giants even more dependent on highly successful new products.

Making testing even more complex is the emerging pattern that different drugs work differently for different groups of people. One recent drug, for example, proved to have a significant effect in treating AIDS in African Americans but not in whites. Should the drug be approved only for use by blacks? Should it not be approved because it won’t help whites? The FDA hasn’t yet decided.

Another major impediment to drug development is that the FDA requires efficacy testing for each separate use. Often a drug marketed for one disease turns out to be effective in treating another. Yet the FDA still requires another trip through the bureaucracy. Bayer, for example, is not allowed to market aspirin as a prevention for heart disease, even though studies have shown it cuts the risk of heart attack by half.

What the FDA does allow is for doctors to prescribe drugs for non-approved use on an informal basis. Yet this has produced a whole new round of drug scandals. Warner-Lambert is currently being sued by a “whistle-blower” who has accused the company of paying doctors to publicize Neurontin, an epilepsy drug, for a dozen other conditions, including pain, bipolar disorder, and restless-leg syndrome. Schering-Plough is facing criminal charges for marketing several of its drugs for unauthorized uses.

Rare “orphan” conditions such as restless-leg syndrome are underserved by pharmaceutical research, because the small market for a treatment cannot support the cost of FDA testing. Can Warner-Lambert be expected to spend millions of dollars to get Neurontin approved for each new condition? If a drug is already proved safe for one condition, why not allow its use in others?

One proposal has been to do away with the FDA’s efficacy testing. Doctors and patients could figure out for themselves whether a drug works — once its long-term safety has been established. The problem is that this might expose doctors to liability, which is something they don’t need right now. Another idea would be to allow private organizations to certify safety and efficacy, the way Underwriters Laboratories certifies electrical devices. This would at least break up the FDA’s monopoly mentality and alert it that there are people out there waiting for it to act.

This much is certain. A national regime of price controls for prescription drugs will play havoc with medical progress. When the Clinton administration toyed with price controls in the days of Hillarycare, the annual increase in drug-research funding fell to single digits for the only time in the last two decades.

Should research funding decrease, critics of the pharmaceutical industry argue, the National Institutes of Health could carry the burden of new discoveries. Indeed, they argue that NIH is already subsidizing the drug industry by doing basic research. However, NIH funds only $20 billion of research a year in all fields, while the drug industry spends $30 billion on biomedicine alone. Also, NIH confines itself to basic research and does not carry drugs through FDA testing. With the cancer drug Taxol, for example, NIH spent $32 million over 30 years testing fewer than 500 patients. In 1991, Bristol-Myers Squibb licensed the compound and spent $1 billion shepherding it through FDA approval. Only then did Taxol become a major cancer treatment.

America is virtually encircled by countries already imposing drug price controls to support their nationalized health care systems. Europe and Canada have dried up their homegrown drug research by fixing prices. Continental Europe now produces less than one-third of the world’s new drugs, even though the testing procedures there are less demanding. Prices are so out of line that American resellers have taken to purchasing American drugs abroad and importing them back into the United States for sale at discount rates.

That only makes it more important that the United States hold the line. Europe and Canada are essentially piggybacking on American medical research. Half the new drugs in the world are now developed in the United States. There is nowhere else to fall back on. If we start imposing price controls, the medicines we use today are the same ones we’ll be using 20 years from now.

William Tucker, a columnist for the New York Sun, is a fellow at the Discovery Institute.