Has pharma been doing R&D wrong all these years?

You can’t keep up with the latest biopharmaceutical industry news without hearing about the crisis in R&D productivity. Basically, the amount of money companies spend on R&D has been growing at a rapid pace, but productivity, as measured by the number of new products that reach the market, has not kept up. In fact, it’s been pretty flat over the last 20 or so years, as the graph below illustrates (link to source). As a result, the cost of getting a new drug approved by the FDA has been pegged at north of $1B once you include all the money spent on projects that go nowhere.

What’s the reason behind this trend? Well, a number of theories have been put forth:

1. All the “low-hanging fruit” has been picked. I honestly find this reason to be pretty weak. Sure, we probably don’t need another opioid receptor agonist/antagonist, and I doubt there is much use in finding another M1-muscarinic receptor agonist, but considering how little we know about the biological systems that make up the human body, I find the idea that we’re running out of easy targets laughable. There are plenty of easy targets out there that would produce a plethora of new drugs; the problem is we don’t know what they are.

2. The strategic shift away from innovation toward financial returns has killed productivity. There is likely something to this point. However, this trend didn’t just start happening in the mid-1990s. I was recently speaking with a gentleman who is getting close to retirement after spending most of his life in pharma. He mentioned how in the mid-1970s he was working for Searle and a new CEO came in who gutted R&D in order to improve the bottom line. I have no doubt that a strategy overly focused on profit maximization can reduce R&D productivity, but I don’t think that’s the root cause of today’s problems.

3. The recent spate of mergers and acquisitions has killed morale among R&D personnel. Again, I think there is something to this, but mergers and acquisitions aren’t a recent phenomenon either. I personally witnessed the level of morale at a big pharmaceutical company during the multiple mergers of the early 2000s, and yes, productivity took a nosedive. However, I don’t think this is the root cause of the drop in R&D productivity.

At this point you might be asking, “Well, what is it then?” I don’t claim to have the final answer, but a recent paper in Nature Reviews Drug Discovery gave me pause because it backed up a hypothesis I’ve been thinking about for the past few years.

The most logical way to approach this problem is to ask: when were things better, and what has changed since then? If we look back to the golden age of pharma, say the 1950s–1970s, we see incredible productivity. Entire drug classes were discovered during this time, including the benzodiazepines, antipsychotics, synthetic opioids, antifungals, and so on. Janssen Pharmaceuticals alone discovered over 70 new chemical entities over a 40-year period starting in the 1950s, with many of those coming in the earlier years.

So what has changed since then? Well, the Nature paper discusses the shift in R&D strategy from phenotypic screening to target-based screening. In layman’s terms, it was the change from screening drugs based on the response they produce in living tissue or a whole organism to screening drugs based on the effect they produce on a drug target (typically a receptor or enzyme). Phenotypic screening is how the benzodiazepines were discovered. The first benzodiazepine, chlordiazepoxide, was not synthesized because anyone thought it would make a good anxiety drug; it was the unexpected product of a chemical reaction. The compound was then administered to a laboratory animal (likely a mouse or rat) and its sedative effect was noted.

Contrast this with the drug discovery strategy typically seen in pharmaceutical companies today: researchers identify a drug target believed to play a role in a disease, and then the chemists and biologists set about making new chemicals that interact with that target in a particular way, optimizing for solubility, logP, and all the other metrics that make for a good drug. At this point they have no idea whether the drug actually works; they only know it interacts with the target. They then move to animal models of the disease and try to confirm efficacy, which, if successful, leads to the drug being tested in humans.
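To make the “metrics that make for a good drug” a bit more concrete, here is a minimal sketch of the kind of drug-likeness check a medicinal chemistry team might run during optimization, using the open-source RDKit toolkit and Lipinski’s rule of five. The molecule, thresholds, and function name are purely illustrative; real programs use far richer criteria.

```python
# Illustrative sketch only: a Lipinski rule-of-five filter with RDKit.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_rule_of_five(smiles: str) -> bool:
    """Return True if the molecule satisfies Lipinski's rule of five."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    return (
        Descriptors.MolWt(mol) <= 500          # molecular weight
        and Descriptors.MolLogP(mol) <= 5      # lipophilicity (logP)
        and Lipinski.NumHDonors(mol) <= 5      # hydrogen-bond donors
        and Lipinski.NumHAcceptors(mol) <= 10  # hydrogen-bond acceptors
    )

# Aspirin as a familiar example molecule
print(passes_rule_of_five("CC(=O)Oc1ccccc1C(=O)O"))  # True
```

The point is simply that target-based campaigns spend enormous effort optimizing properties like these long before anyone knows whether the compound does anything useful in a living system.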

The key difference between these two drug discovery strategies is that the first (phenotypic screening) ignores how the drug works and focuses only on whether it works. Target-based screening focuses on knowing how the drug works, not whether it works (yes, I’m painting with a broad brush here, but bear with me). Now, if you’re in the business of discovering new drugs that interact with biological systems you barely understand, which makes more sense as a strategy? Which is more likely to lead you to a new class of drugs? It’s really a choice between trying to expand on current knowledge (target-based screening) and throwing a Hail Mary in the hope of finding something you never knew existed (phenotypic screening).

If you’re at all interested in this topic, I strongly encourage you to read the Nature paper (it’s open access) and look at some of the data the authors uncovered. They do a much better job of explaining the trade-offs between the two methods, and they came up with some pretty interesting evidence that the shift away from phenotypic screening has had direct consequences for R&D productivity.

Maybe it’s time for pharma to look to the past for guidance on how to succeed in the future?