A new study focuses on "the wholly artificial 'fact' of average R&D costs per new drug"
Michael Hiltzik, LA Times, 4/3/2011
A new study systematically dismantles the industry's claim that the research and development cost of bringing a new drug to market is $1.3 billion.
Every time I come across a big-number statistic about the size or significance of some industrial activity, my nose wrinkles.
You know the figures I mean: The porn business takes in $10 billion to $14 billion a year. California's marijuana harvest is worth $14 billion a year, making it the state's biggest cash crop. NCAA March Madness costs employers $1.8 billion in lost productivity.
Figures like these have several things in common: They're eye-catchingly big, they're unverifiable by empirical means and they reek of fakery.
The statistic that may be most hazardous to your health is one pegging the research and development cost of bringing a new drug to market at $1.3 billion. Its purveyor is the Pharmaceutical Research and Manufacturers of America (PhRMA), which exploits the number's shock value to secure its lobbying agenda on Capitol Hill.
Tax breaks for drugs for rare diseases? Faster drug approvals by federal regulators? Stronger protection against competition from generics? All these goals have been achieved, based at least partially on the claim that drug makers require huge profits to fund R&D.
The supposedly high cost of research and development is also cited to argue against the reimportation of cheap drugs from Canada and direct negotiation over drug prices by Medicare.
These arguments are backed by truckloads of cash: Big Pharma has been the biggest spender on Washington lobbying of any industry, laying out $2.1 billion over the last dozen years to get its way, according to congressional figures.
The industry's R&D claim has been questioned for years, but seldom as thoroughly as in a recently published paper that calculates the true mean R&D cost as less than $60 million per drug in 2000 dollars ($76 million today).
The study's authors, Donald W. Light of the University of Medicine and Dentistry of New Jersey and Rebecca Warburton of the University of Victoria in Canada, systematically dismantle what they call "the wholly artificial 'fact' of average R&D costs per new drug" by removing inflated multipliers and calculating the tax breaks drug companies get for their R&D, among many other steps.
And they underscore that the industry's estimate has always been based on raw data the drug companies keep confidential. That's a major issue: the industry has an obvious incentive to maximize its R&D claims, and because the underlying data stay secret, those claims can't be double-checked.
"The most important takeaway is that nobody knows the real cost of R&D because no one has seen the data," Light told me last week.
The basis for the industry's estimate, and the main target of Light and Warburton, is a 2003 study by the Center for the Study of Drug Development, an industry-funded institute at Tufts University. That study's authors, led by the center's director of economic analysis, Joseph A. DiMasi, determined that the capitalized cost of R&D per new drug was $802 million in 2000; industry lobbyists updated that figure for inflation to $1.32 billion as of 2006.
Light and DiMasi have been taking potshots at one another in peer-reviewed journals and other venues for a long time. Tufts responded to the most recent broadside with an exasperated-sounding statement that the claims by Light and Warburton had been "thoroughly rebutted" in 2005 and that the latest paper had almost nothing new to say.
The university vouched for the "scholarship, integrity, and validity" of its published paper, and it's fair to say that the DiMasi study is a very sophisticated analysis of pharmaceutical R&D financing that openly sets forth its assumptions and limitations.
The trouble with the Tufts paper isn't so much its authors' work, it's that their findings have been grossly distorted by drug industry lobbyists to make a claim the study doesn't support.
DiMasi himself isn't entirely comfortable with how PhRMA characterizes his findings. "I try to stay away from language that refers to 'single drugs' or 'cost per drug,' " he told me. "It's a little ambiguous. People who are not familiar with the studies might make the wrong interpretation."
But as Light and Warburton point out, there are other issues with the Tufts study. The biggest one is that the raw data are secret. The study is based on research and development costs for 68 unidentified drugs, provided to the researchers in confidence by 10 unidentified drug companies. Therefore it's impossible to verify which drugs were used in the study, whether they were truly representative of all Big Pharma research during the study period of 1990 to 2001, or if the cost data provided by the companies are even credible to begin with.
DiMasi assured me that his survey's drugs are representative of the industry as a whole in terms of the risk and expense of their R&D, and he pointed out that his findings were validated by a 2006 study using a separate, public drug database. But the authors of that study, who were researchers at the Federal Trade Commission, stated in their paper that they couldn't really be sure that their data were comparable. Of course, a study that doesn't reveal what it measured or provide a way for outsiders to reproduce its findings has an insurmountable flaw. That alone should render the Tufts study ineligible for use as the basis for any policymaking.
Another issue is that the cost figure produced by DiMasi and his team includes "opportunity costs" — that is, the potential income the drug companies might have made on other investments, such as equity securities, if they hadn't bothered to tie their money up for years developing drugs and getting them to market. The authors essentially doubled their calculations of out-of-pocket spending to accommodate these speculative lost profits. By financial alchemy, in other words, they made $403 million in tangible spending look like $802 million.
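The arithmetic behind that doubling is worth seeing. In rough terms, capitalization means compounding each year's R&D outlay forward to the launch date at an assumed "cost of capital." The sketch below is illustrative only, not the Tufts model itself: the 11% rate and the 12-year, evenly spread development timeline are assumptions chosen to show how such compounding can roughly double an out-of-pocket figure like $403 million.

```python
# Illustrative sketch (not the study's actual model): compounding each
# year's R&D spending forward to launch at an assumed cost of capital.
# The 11% rate and 12-year even-spending timeline are assumptions for
# illustration, not figures taken from the Tufts study.

def capitalized_cost(annual_spend, years, cost_of_capital):
    """Compound each year's spending forward to the launch date.

    Spending in year t (paid at year's end) compounds for (years - t)
    additional years before the drug reaches market.
    """
    return sum(annual_spend * (1 + cost_of_capital) ** (years - t)
               for t in range(1, years + 1))

out_of_pocket = 403.0            # $ millions, spread evenly
years = 12                       # assumed development timeline
annual = out_of_pocket / years

total = capitalized_cost(annual, years, 0.11)
print(f"out-of-pocket: ${out_of_pocket:.0f}M  capitalized: ${total:.0f}M "
      f"(multiplier {total / out_of_pocket:.2f}x)")
```

Under these assumed inputs the multiplier comes out near 1.9, which shows how an opportunity-cost adjustment of this kind can nearly double a spending figure without a single additional dollar leaving the company.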
DiMasi contends that opportunity cost is relevant to his findings because "it shows what it costs to have sufficient incentive to develop a new drug." Light and Warburton, however, argue not only that the Tufts opportunity-cost multiplier is far too generous, but that it reflects a nonexistent choice: a company in the innovation business doesn't have the option of not investing in R&D. A drug company that leaves its money in a securities account isn't a drug company, it's a hedge fund.
What's especially questionable is treating these "foregone returns," as the Tufts researchers would call them, as the equivalent of out-of-pocket expenditures. When laymen hear that it costs $1 billion to develop a drug, they presume that the money is cold cash, possibly with an inflation factor thrown in — not that half of it is profits a firm might have earned by not investing in research at all.
In any case, the profit margins of major drug companies have been running as high as 49%, which suggests that the industry makes a lot more from developing and selling drugs than it could in the stock market. A Big Pharma CEO who earns even 10% on stocks and bonds while his rivals earn 49% by hawking painkillers and sedatives will become an ex-CEO faster than you can say "9 out of 10 doctors recommend."
Here too the fault lies not in the Tufts findings, but in their distortion by the drug industry. DiMasi's study proposed a reasonable standard for judging the potential yield of a long-term investment against a fixed return; it's the pharmaceutical lobby that, with staggering dishonesty, misrepresents this theoretical metric as if it's cold hard cash.
An important problem with the industry's citing a single figure to represent all drug R&D is that drug R&D isn't monolithic — the costs vary widely by the type of drug. The FTC study largely used DiMasi's methodology to conclude that average development costs ranged from $479 million for an AIDS drug to $936 million for an arthritis medicine. DiMasi himself published data in 2004 showing similar variations among therapeutic categories.
That said, the DiMasi team did produce a sophisticated study of the overall costs of pharmaceutical R&D. It's the industry's lobbyists who have caricatured it as a finding of the "average cost to develop one new drug." The lobbyists would have you believe that the cash cost of inventing and testing the contents of every one of those amber vials in your medicine cabinet is $1.3 billion, and consequently that any policy that cuts into drug company profits will mean less R&D and fewer lifesaving medicines.
Light and Warburton have done well to deconstruct how the drug industry contrived this all-important claim about R&D costs. Is anybody listening? When I asked Light if he's heard any reaction from policymakers since his paper was published in February, he replied, "I'm not getting the impression that members of Congress are paying attention."
Independent findings like his are easily overwhelmed by the tide of Big Pharma's dollars. "They rule the airwaves," he said.
Michael Hiltzik's column appears Sundays and Wednesdays. Reach him at email@example.com, read past columns at latimes.com/hiltzik, check out facebook.com/hiltzik and follow @latimeshiltzik on Twitter.