How the government makes healthcare more expensive by cutting costs

Rising healthcare costs are one of the most important issues facing the United States today.  Although the rate of increase has slowed over the past few years, the amount of money spent on healthcare in the US has risen at an alarming rate over the past decade.  It’s not only an issue for private insurers: Medicare and Medicaid spending also threatens to balloon the federal budget.  Incredibly, cost-saving measures instituted by the government have led, and will continue to lead, to further increases in healthcare costs.  How will this happen?  Let’s focus on the treatment of cancer for a moment; the cause and effect should be clear.

When a cancer patient requires treatment such as chemotherapy, they are typically treated in one of two settings: (1) a community-based oncology clinic or (2) a hospital-affiliated clinic.  Community-based clinics are private clinics owned by the doctors who practice there and can vary in size from only a few physicians to 50 or more.  These clinics are run like small businesses with physicians paying themselves a salary and the clinic taking either a profit or loss at the end of the year. Hospital-affiliated clinics look very similar to community clinics from the outside, but differ in that the hospital system handles all of the accounting and the physicians who work there are typically paid a salary.

Another, very important difference between the two settings is that treating a cancer patient in a hospital-affiliated clinic is typically much more expensive than treating a patient in a community clinic.  Data gathered by Avalere shows that, on average, the cost of treating a cancer patient is 20% to 55% more in a hospital-based setting (table 3 in this report).


Now, if you wanted to reduce the cost of healthcare, it would seem prudent to encourage cancer patients to receive their care at community-based clinics.  If you’re saving somewhere between a fifth and a third per patient, it adds up to a significant amount of money.  The reality is that the government has instituted two different programs that are pushing patients away from community-based clinics towards hospital-affiliated clinics.  And incredibly, both of these programs were instituted to reduce healthcare costs.

The first government program was an attempt to reduce the amount spent on the drugs used to treat cancer.  Typically, oncologists are reimbursed through “buy-and-bill”: physicians purchase a cancer drug with their own money and, once they have used it to treat a patient, they bill the patient’s insurance company.  Insurers, both public and private, typically pay physicians more than what the drug actually costs in order to cover some of the overhead associated with treating a patient.  This extra money is typically referred to as the “spread.”  In the past, the spread was quite generous; if a doctor bought a drug for $10,000, they might get an additional 10 or 20% back from the insurer ($11,000 or $12,000 total).  As healthcare costs continued to rise, Medicare and Medicaid decided to reduce their reimbursement levels to provide as little as an additional 4.2% in spread, and there is talk of lowering it further.  These changes have drastically reduced the revenue that community clinics take in and in many cases have forced doctors to sell or close their community-based practices.  A recent report showed a 20% increase in 2012 in oncology clinics either closing or merging with existing hospital systems.  Between 2005 and 2011, the percentage of cancer patients treated in the hospital setting increased by roughly 150% (from 13.5% to 33%).  As the number of community-based clinics decreases, more patients get treated in the more expensive hospital-affiliated clinics.
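To make the squeeze concrete, here is a rough back-of-the-envelope sketch in Python.  The $10,000 drug cost, the historical ~20% spread and the current 4.2% add-on come from the figures above; the annual patient volume is a hypothetical number chosen purely for illustration.

```python
# Rough sketch of how a lower "spread" squeezes a community oncology clinic.
# Drug cost and spread percentages come from the post; the patient volume
# is a hypothetical number chosen only for illustration.

drug_cost = 10_000          # clinic buys the drug for $10,000
old_spread = 0.20           # historical reimbursement of ~10-20% above cost
new_spread = 0.042          # current add-on of roughly 4.2%
patients_per_year = 500     # hypothetical annual treatment volume

old_margin = drug_cost * old_spread * patients_per_year
new_margin = drug_cost * new_spread * patients_per_year

print(f"Old overhead contribution: ${old_margin:,.0f}/yr")   # $1,000,000/yr
print(f"New overhead contribution: ${new_margin:,.0f}/yr")   # $210,000/yr
print(f"Revenue lost to the clinic: ${old_margin - new_margin:,.0f}/yr")
```

Whatever volume you plug in, the overhead money available to run the clinic shrinks by roughly three quarters, which is the pressure pushing these practices to sell or close.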

The other factor driving patients to hospital-affiliated clinics is the 340B program.  Initially designed to assist hospitals that treat patients with little or no health insurance, the program allows certified hospitals to purchase outpatient drugs (such as cancer therapies) at a 23.1% discount (the same discount Medicaid gets).  This certainly helps hospitals that treat underinsured and uninsured patients, but the program has a catch: 340B hospitals can purchase all of their outpatient drugs through the program, whether they are used for uninsured or insured patients.  In addition, hospitals don’t have to pass the savings along to the insurer; they are allowed to pocket the difference.  It isn’t unusual for oncology therapies to cost more than $100K per patient per year, so that 23.1% discount becomes a lot of money, all of which goes to the hospital’s bottom line.  To give you an example, the Duke University Hospital system effectively doubled its profit margin on drugs to 53%, for a gross profit of $70M.  This has created a huge incentive for hospitals both to become 340B certified and to attract oncology patients to their clinics.
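A similar sketch shows the 340B incentive, using the $100K-per-year therapy cost and the 23.1% discount mentioned above; the number of insured patients is a hypothetical figure for illustration only.

```python
# How the 340B discount turns into hospital margin on insured patients.
# The therapy cost and 23.1% discount are from the post; the number of
# insured oncology patients is a hypothetical illustration.

annual_therapy_cost = 100_000   # list price of therapy per patient per year
discount_340b = 0.231           # 340B discount off the purchase price
insured_patients = 200          # hypothetical insured patients treated

# Hospital buys at the discounted price but bills the insurer the full price.
margin_per_patient = annual_therapy_cost * discount_340b
total_extra_margin = margin_per_patient * insured_patients

print(f"Extra margin per patient:   ${margin_per_patient:,.0f}")     # $23,100
print(f"Extra margin across patients: ${total_extra_margin:,.0f}")   # $4,620,000
```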

The situation we now have is that cancer patients are being pushed out of the less expensive community-based setting as oncologists struggle to stay profitable, and pulled into the more expensive hospital-affiliated clinics as hospitals seek to capture as much 340B business as possible.  Not exactly a great way to save money, is it?

Controlling healthcare spending will be a priority for the US in the coming decade.  In order for that to happen, we need a coordinated effort by the government and private insurers to find a way to incentivize not only quality care but also cost-efficient care.  Without an understanding of the economic pressures and incentives offered by the current system, this may prove a very difficult task.


A setback for the hygiene hypothesis

It’s always interesting when a clinical trial you’ve been watching reads out.  One of the first posts I wrote for this blog was about a clinical trial that Coronado Biosciences was running based on the hygiene hypothesis.  The company was testing the efficacy of T. suis ova (pig whipworm eggs) in treating Crohn’s disease.


Well, Coronado Biosciences just released the results of a phase 2 trial (TRUST-1) and they are not positive.  It was a double-blinded, placebo-controlled trial of 250 patients with moderate to severe Crohn’s disease, so a positive signal would have been pretty convincing evidence of the efficacy of the therapy.  The primary endpoint of the trial measured response using the Crohn’s Disease Activity Index (number of patients who see a >100-point decrease in CDAI) and the secondary endpoint measured induction of remission (number of patients who achieve a CDAI score of less than 150).  For both endpoints, there was no significant difference between the treated patient population and the patients who received a placebo.  This is not a “we almost made it” result, but rather a “this doesn’t work at all” result.

The company did see a small signal in patients with very high CDAI scores (> 290), but unfortunately it was not statistically significant.  Coronado blamed it on a higher than expected placebo response, which may at first glance seem like a cop-out, but is actually quite common in trials for relapsing-remitting inflammatory diseases.

Another European trial is underway (TRUST-2) so Coronado will have a second chance to prove the therapy works, but things don’t look very promising at this point.  Crohn’s disease is a pretty tough market to compete in as several effective therapies already exist.  Even if TRUST-2 has a positive read-out, it’s unlikely the efficacy would be enough to support commercialization.

Coronado is also running trials of T. suis ova in several other indications including ulcerative colitis, multiple sclerosis, autism and psoriasis.  I’ll be following those trials closely as a positive outcome would be an incredible jump forward in our understanding of the immune system and offer new therapies for patients who often have few other options.  I’ll keep you posted.

 

Follow-up: Only a few days after posting this, Coronado Biosciences halted their TRUST-2 trial due to lack of efficacy.  Not surprising, but certainly a nail in the coffin for T. suis ova in the treatment of Crohn’s disease.  It will be interesting to see how the other indications fare…


Big pharma’s “pay-to-delay” deals: Helping or hurting patients?

A few weeks ago it was announced that Jon Leibowitz has decided to leave his post at the Federal Trade Commission.  Jon has been the driving force behind the FTC’s attempt to ban so-called “pay-to-delay” deals that have become increasingly common between branded and generic pharmaceutical companies.  The FTC claims that these deals are “anti-competitive” and basically amount to collusion between branded and generic companies.  According to the FTC, these deals cost the American public $3.5 billion per year in higher drug costs as cheaper generic drugs are delayed from entering the marketplace.  Jon’s efforts seem to have paid off: Senator Chuck Grassley recently introduced legislation banning the deals, and the Supreme Court will soon decide whether the deals are legal.


You might be wondering what these “pay-to-delay” deals are exactly, so let’s look at a hypothetical example:

Let’s say a branded pharmaceutical company is selling a patent-protected drug with annual sales of $5 billion.  The company’s patent on the drug doesn’t expire until 2020, so it will continue to promote and sell the drug until then, at which point a generic drug company will begin producing and selling a generic version (after gaining approval from the FDA).  Generic companies compete with each other to get their generic version approved first, since the first approval comes with a 180-day exclusivity period during which no other generic company can sell its version.  During those 180 days, the generic company will undercut the price of the branded therapy and grab a large part of the market share.  Since the generic drug company only has to do a fraction of the R&D that the branded company did, the 180-day exclusivity is very profitable.  After the 180 days are up, any generic drug company can sell its version and the price falls to the point where there is little profit to be made.

Where things get complicated is that generic companies often don’t wait until the branded drug’s patent expires.  Instead, they will challenge the validity of the patent in court and, if successful, they get the coveted 180 days of exclusivity and can start selling their generic version immediately.  As you can imagine, this potential payoff creates a lot of incentive to challenge a patent.

What does the patent holder do when its patent is challenged?  Since there are often billions of dollars of profit at stake, it fights it out in court, hiring some very expensive lawyers to argue that the patent is valid.  If the generic drug company has already started selling its generic version before the patent issue is settled (a so-called “at risk” launch), the branded drug company can sue for damages if the patent is found to be valid.  Since the law allows for recovery of triple damages, losing a patent decision is what generic companies fear the most (for a great example of such an outcome, look no further than Pfizer’s request for $2 billion in damages for Teva’s at-risk launch of Protonix).

As you can see, the stakes are very high for both companies if a patent challenge actually ends up being decided by a judge.  It’s an all-or-nothing outcome: one party will win big and one party will lose big.  It’s for this reason that many patent challenges lead to out-of-court settlements that include so-called “pay-to-delay” deals.  In exchange for a payment from the branded drug company (either in cash or other financial incentives), the generic drug company agrees to delay the launch of its generic version.  You can think of this as a way to “meet in the middle”: both companies get something out of the deal, and it eliminates the risk of being on the losing end of a winner-takes-all outcome.

However, is the FTC’s allegation true that pay-to-delay deals delay the entry of cheaper generic drugs and hurt the consumer?  No.  The launch of the generic drug is only “delayed” in the sense that it would enter the market later than if the generic company had succeeded in invalidating the patent.  But the reason the generic company agrees to a pay-to-delay deal is that it doesn’t believe it will succeed in invalidating the patent.  If pay-to-delay deals were banned, the generic company would likely just pack up its bags and head home, as the possibility of losing the patent challenge is more risk than it can tolerate.  If anything, pay-to-delay deals actually result in a generic drug entering the market sooner than it would have otherwise.

If pay-to-delay deals are banned and branded and generic companies lose the ability to “meet in the middle”, the availability of generic drugs will likely be delayed in many cases, which is exactly what the FTC is trying to avoid.  It remains to be seen whether Chuck Grassley’s bill passes into law, but in the meantime, keep an eye out for the Supreme Court decision.  The court’s ruling will have far-reaching consequences for the pharmaceutical industry and, in the end, the consumer.


Just how much profit is there in a new drug?

It seems like the debate over the cost of patented drugs never ends: patients complain about the ever increasing prices and the drug companies complain about the high costs of R&D.  However, one question you rarely see asked is how much does it really cost to make that one dose of a new drug?  How much of the price is profit?

Now keep in mind when I say profit I really mean contribution margin.  Contribution margin is price minus the variable cost of producing a single unit and ignores the fixed costs which in the case of a new drug includes the cost of R&D, marketing and the equipment used to manufacture the drug, just to name a few.  It’s called a contribution margin because it’s the part of the revenue that contributes to paying for the fixed costs.  A good example of a contribution margin comes from the entertainment industry: once you’ve shot a film, paid the actors, edited the film and put it on a DVD, what is the cost of producing another copy of the DVD compared to the price it’s sold for?

In the world of pharmaceutical manufacturing, the variable cost of producing a unit of drug can vary a great deal.  Relatively simple drugs, like aspirin, can be very cheap to manufacture, while complex biological drugs, like Herceptin, can be relatively expensive to manufacture.  And the costs don’t stop there.  What patients typically refer to as a “drug” is known in the industry as a “drug product”: the tablet or injection that patients actually receive.  Before a drug is made into a tablet or injection, it’s known as API (active pharmaceutical ingredient) and is typically a powder of some sort.  Since you can’t just hand a patient a bottle of powder, you need to turn the API into a drug product first, and that step can be either inexpensive (making a tablet) or very expensive (making an inhaler for an asthma medication).

BG-12 (Dimethyl fumarate)

OK, so now that we have all of the industry jargon out of the way, let’s look at an actual example using (very) rough numbers.  Biogen Idec has a new drug in clinical trials for multiple sclerosis known as BG-12.  BG-12 is unique in that it is an incredibly simple and inexpensive drug.  BG-12 is dimethyl fumarate, a chemical that is manufactured industrially in ton quantities.  To give you an example, here is a Chinese chemical supplier offering dimethyl fumarate for $1 – $50 per metric ton, with a minimum order of 2 metric tons and the capacity to supply up to 2,000 metric tons per year.  Supply is obviously not an issue here.

If BG-12 is approved by the FDA, patients with multiple sclerosis would receive 240 mg of the drug 2 or 3 times a day (based on clinical trial data).  Using rough approximations, let’s calculate how much it would cost to treat one MS patient for a year using BG-12.

Cost of dimethyl fumarate: $25/ton (let’s choose the average cost)

Dose of BG-12 per day: 750 mg (let’s go with the high dose)

Total dose of BG-12 per patient per year: 750 mg * 365 days = 274 g

Total cost of BG-12 per patient per year: $25/ton * (274 g / 1,000,000 g per ton) = $0.007

Based on the commodity price of dimethyl fumarate, it would cost less than one cent to supply an MS patient with a year’s worth of BG-12’s active ingredient.  Of course, you need to turn the dimethyl fumarate into tablets and put them in a bottle, so let’s assume all of those added costs come to $0.05/tablet.  I honestly have no idea how much it actually costs to make tablets on an industrial scale, but it can’t be much considering you can buy a bottle of 100 generic Tylenol tablets at Target for about $3, or $0.03/tablet all in.  Still, we’ll estimate on the high side, add in the (negligible) cost of the dimethyl fumarate and round up to $0.06/tablet.  At three tablets a day, that works out to $65.70 to treat an MS patient for a year with BG-12.  Not very expensive!

Now we know the variable cost to produce a year’s treatment of BG-12, but how much will it sell for? Well, BG-12 looks like it will be a very effective drug without a lot of side-effects, so Biogen certainly isn’t going to price it less than treatments that are currently on the market.  If we take a look at this article from Pharmalot, we can see that the annual cost to treat an MS patient with current drugs ranges from approximately $36,000 to $48,000 per year.  Let’s assume that Biogen will match the highest price therapy (Gilenya).

Contribution margin = revenue – variable cost

 = $48,000 – $65.70

 = $47,934.30, or a 99.86% contribution margin
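For anyone who wants to play with the assumptions, here is the same back-of-the-envelope calculation as a small Python sketch; the API price, tablet cost and selling price are all the rough guesses from above, not actual Biogen numbers.

```python
# Back-of-the-envelope contribution margin for BG-12 using the rough numbers
# from the post. Every input is an assumption and easy to swap out.

api_price_per_ton = 25         # dimethyl fumarate, $/metric ton (midpoint of $1-$50)
dose_mg_per_day = 750          # high dose, rounded up from 240 mg three times daily
tablets_per_day = 3
days_per_year = 365

api_grams_per_year = dose_mg_per_day * days_per_year / 1000              # ~274 g
api_cost_per_year = api_price_per_ton * api_grams_per_year / 1_000_000   # < $0.01

cost_per_tablet = 0.06         # assumed all-in cost to make and package one tablet
variable_cost_per_year = cost_per_tablet * tablets_per_day * days_per_year  # $65.70

annual_price = 48_000          # assume pricing matches the most expensive MS therapy
contribution_margin = annual_price - variable_cost_per_year
margin_pct = 100 * contribution_margin / annual_price

print(f"API cost per patient-year: ${api_cost_per_year:.3f}")
print(f"Variable cost per year:    ${variable_cost_per_year:.2f}")
print(f"Contribution margin:       ${contribution_margin:,.2f} ({margin_pct:.2f}%)")
```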

Now before you start screaming about greedy pharmaceutical companies, remember, this is just the contribution margin.  Current estimates put the cost of developing a new drug somewhere between $500M and $2B.  Biogen will have to treat a large number of patients for many years to recoup its development costs and begin making a profit on BG-12 (for example, at roughly $48,000 of contribution margin per patient per year, it would take on the order of 4,000 patients treated for 5 years to pay off $1B in development costs).  This is why it’s estimated that only 1 in 3 drugs that hit the market actually makes a profit for the company; you may have an incredible contribution margin on a drug, but if all of that margin goes into paying off the fixed costs of development, you may end up not making any profit at all.

The other lesson from this exercise is that now you can see why the cost of generic drugs is so low.  A generic pharmaceutical company incurs only a fraction of the development costs that an R&D-based company does, and thus can charge a much lower price (typically 80-90% lower), accept a much smaller contribution margin and still make a profit.

Update: After some delay, the FDA has promised to make a decision on BG-12 (brand name: Tecfidera) on March 28th, 2013.


Vivus’ new obesity drug: Will it get scooped by generics before it even hits the market?

This past Wednesday, the FDA approved Belviq, the recently renamed obesity drug from Arena.  It wasn’t a huge surprise since the FDA review panel gave a strong 18-4 vote of support for its approval.  Interestingly, it doesn’t appear that the REMS requirement for Belviq is all that onerous (something I commented on in my last blog post), so Arena looks like they have a real winner on their hands.

The next obesity drug up for approval is Qnexa from Vivus.  I’ve written about Qnexa in the past (here and here) and it’s been a long journey for Vivus, with an initial rejection from the FDA, negotiations over new clinical trial data requirements and a refiling of the NDA.  However, after all that, it looks like Qnexa will be approved based on the 20-2 vote by the advisory panel.  And that’s a good thing!  The clinical trial data for Qnexa is actually much more positive than that for Belviq.  Almost 50% of patients taking Belviq lost at least 5% of their body weight (an average of 5.7% overall), while 50% of patients taking the highest dose of Qnexa lost at least 15% of their body weight (an average of 14.4% overall).  You’re probably thinking, “Wow!  Vivus has the obesity market cornered!”  Not so fast: Vivus may get scooped by generics before they even sell their first pill.

Unlike Belviq, which is a new chemical entity covered by numerous patents that prevent other drug companies from making and selling the active ingredient, Qnexa is a combination of two drugs that are already available as generics (phentermine and topiramate).  The highest dose of Qnexa contains 15 mg of phentermine and 92 mg of topiramate.  A quick internet search reveals you can get a bottle of 100 15-mg tablets of phentermine for $142 and a bottle of 120 100-mg tablets of topiramate for $204, a cost of slightly over $3 per day, or roughly $90/month.  Yikes!  That’s some stiff competition for a branded drug where a price of $150-200 for a month of therapy is seen as being “on the low end”.
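As a quick sanity check on the per-day math, here are those generic bottle prices run through a short Python sketch (the prices are the ones quoted above and will obviously vary by pharmacy).

```python
# Rough per-day and per-month cost of the generic phentermine + topiramate
# combination, using the bottle prices quoted in the post.

phentermine_bottle = 142.00   # 100 tablets of 15 mg
topiramate_bottle = 204.00    # 120 tablets of 100 mg

cost_per_day = phentermine_bottle / 100 + topiramate_bottle / 120
cost_per_month = cost_per_day * 30

print(f"Generic cost per day:   ${cost_per_day:.2f}")    # about $3.12
print(f"Generic cost per month: ${cost_per_month:.2f}")  # about $94
```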

However, Qnexa does have an advantage that the generic drugs do not: Qnexa is a controlled-release combination of phentermine and topiramate (and this formulation is no doubt patented).  The benefit is that patients can take the drug less frequently and the levels of the drug in the body are much more stable, since the dose is slowly released over a period of time.  When treating obesity, where food cravings and appetite can vary over the course of a day, this is a significant advantage.  However, it remains to be seen whether a physician could simply have the patient split the dose of the generic drugs, take them twice a day and see results similar to the controlled-release Qnexa.  I have no doubt that physicians will give it a try (or are already trying it).

Two factors will determine the impact of generic phentermine and topiramate on Qnexa’s sales: the price of Qnexa and how insurance companies respond to it.  If Qnexa hits the market at a modest premium, say $100 – $120/month, I predict that insurance companies won’t balk at it.  They won’t like it, but they’ll also judge the difference in price as too small to devote resources to controlling.  However, if the price goes much higher, say $150+, insurance companies will take notice and start to implement controls that could significantly curtail Qnexa sales.

Now, insurance companies can’t force a physician to prescribe generic phentermine and topiramate instead of Qnexa, since neither is approved for use in weight loss (physicians, on the other hand, are free to prescribe drugs off-label).  However, insurance companies have the ability to heavily incentivize the use of particular drugs through things like co-pays, step edits and prior authorizations.  The really big risk for Qnexa is the co-pay.  Insurance companies typically have “tiers” for their prescription drug coverage that look something like this:

Tier 1: Generic drug: $10 co-pay

Tier 2: Preferred branded drug: $40 co-pay

Tier 3: Non-preferred branded drug: $65 co-pay

Tier 4: Specialty drug: 20% co-insurance

If we imagine a scenario where Qnexa is priced at $120/month (which is way lower than what I think they’ll price it at), insurance companies won’t complain too much about the cost and may choose to put it on tier 2.  However, patients will see a significant difference in out-of-pocket expense for Qnexa vs. generic phentermine and topiramate.  If a doctor prescribes Qnexa, the patient would pay around $40/month for the prescription.  If a doctor prescribes generic phentermine and topiramate, the patient only pays $20 per month ($10 for each prescription).  Over a year, that’s a difference of $240; not a lot, but enough to provide some incentive to take the generics.  If Qnexa gets priced higher than $150, insurance companies may put it on tier 3, where patients would be paying $45/month more than for the generics, a difference of $540/year.  That is a significant amount of money in most people’s books and enough to get patients to ask their physician to prescribe the generic combination.
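Here is the out-of-pocket arithmetic above as a small sketch, assuming the hypothetical co-pay tiers listed earlier.

```python
# Annual out-of-pocket difference between branded Qnexa and the two-generic
# combination, under the hypothetical co-pay tiers listed above.

generic_copay = 10       # per generic prescription, per month
tier2_copay = 40         # preferred branded drug
tier3_copay = 65         # non-preferred branded drug
months = 12

generics_per_year = 2 * generic_copay * months   # two separate prescriptions
qnexa_tier2_per_year = tier2_copay * months
qnexa_tier3_per_year = tier3_copay * months

print(f"Generics:        ${generics_per_year}/yr")                                            # $240
print(f"Qnexa on tier 2: ${qnexa_tier2_per_year}/yr (+${qnexa_tier2_per_year - generics_per_year})")  # +$240
print(f"Qnexa on tier 3: ${qnexa_tier3_per_year}/yr (+${qnexa_tier3_per_year - generics_per_year})")  # +$540
```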

Either way you cut it, Qnexa, despite having a clear clinical benefit for obese patients, may have a hard time reaching the multi-billion-dollar-per-year sales estimates that have been floating around.  When the FDA makes its final call on Qnexa on July 20th, keep an eye out for the price Vivus settles on, because it will have a huge impact on how successful (or unsuccessful) the drug becomes.


If the FDA approves the new obesity drugs, will REMS crash the party?

Wow!  A lot has happened since my last blog post.  Two of the three new obesity drugs up for approval (Qnexa from Vivus and Lorquess from Arena) received positive responses from their respective FDA advisory panels despite all of the pessimism from outside observers.  Of course, both drugs have yet to be officially approved by the FDA and they could still be rejected; just ask Intermune.

However, I refuse to be one of the pessimists, and I do believe that at least one of the two drugs will get the FDA’s stamp of approval, if not both.  But before everyone breaks out the champagne and starts celebrating, we need to talk about something called REMS.

REMS stands for Risk Evaluation and Mitigation Strategy and was brought into effect through the 2007 Food and Drug Administration Amendments Act.  It was introduced to allow the approval of drugs that have both an obvious benefit and substantial risks.  Think of it as a finger that tips the risk-benefit scale more to the benefit side.  REMS provides an additional level of control over drugs that otherwise couldn’t be approved for unrestricted use in the general population.  If you’re interested in a more detailed overview of REMS, there are some great summaries here and here.

I’m sure you’re wondering what these controls look like. Well, they come in a number of different flavors.  Here is a list of typical REMS requirements, starting from the least onerous:

  1. Medication guide
  2. Communication plan
  3. ETASU (Elements to assure safe use)
  4. Implementation system

The most basic REMS is a medication guide.  The guide provides additional information for the prescriber so that they are completely informed of the risks and benefits of the drug and can fully inform the patient taking it.  It lays out the concerns around adverse events and the procedures the prescriber can follow to minimize those risks.  The manufacturer provides a draft guide to the FDA, which, if satisfied that it contains all the necessary information, approves it.

Pretty simple so far, right?  Medication guides are the most common type of REMS and account for around two-thirds of all drugs with REMS requirements.  The other third aren’t so lucky; some of those have ETASU requirements.

The types of controls within an ETASU can also vary.  They can be as simple as providing formal training for the prescriber so that he or she understands how the drug should be administered, how to educate the patient and how to recognize and report any adverse events.  On the other end of the spectrum, however, ETASU controls can limit who can prescribe the drug, which pharmacies can fill the prescriptions and where the drug can be administered.  Where the problem comes in is that you can’t just have these controls; you need mechanisms in place to make sure they are being followed.  This requires pharmacists, nurses, physicians and hospitals to take on an enormous administrative burden, and a burden that they don’t get reimbursed for.  That is how REMS can really put the squeeze on a new drug.  If you are a physician who has the choice of prescribing a drug where all you do is write a script versus a drug where your staff has to spend an hour filling out forms just so that the patient can drive across town to the one pharmacy that stocks it, which one would you prescribe?  Even if a drug with an onerous REMS is the only drug available to treat a condition, how often will physicians think, “I would love to use this drug, but I can’t justify the burden on me and my staff”?

Both Qnexa and Lorquess have the potential to be approved with a REMS requirement.  Qnexa contains topiramate, which is known to increase the risk of cleft palate in babies born to mothers who take the drug while pregnant.  REMS seems like a great way to keep Qnexa out of the hands of pregnant women, similar to Revlimid (which has an incredibly onerous REMS).  With Lorquess, the FDA is concerned about a possible increased risk of cardiovascular events, and REMS would be a great way to collect patient data (through a registry) to determine whether that risk is meaningful.

If either of these drugs ends up with strict REMS requirements, you can expect all those financial analysts to quickly revise their revenue estimates and the price of the companies’ stock to react accordingly.  So save your champagne until the FDA decisions come out on June 27th (Lorquess) and July 20th (Qnexa).  You may just end up saving it for New Year’s Eve.


Are small biotechs more productive because they have no other choice?

Over the last few years, much has been written about the poor R&D productivity of the big pharma companies.  There is a bit of controversy as to whether or not small biotechs are truly more productive, but one can’t deny that many of the new drugs being launched didn’t originate in the labs of companies like Pfizer or Sanofi, but rather at small, resource-constrained biotechs like Seattle Genetics (Adcetris) or Optimer (Dificid), to give a couple of examples.

Why is this?  Many theories have been bandied about, such as biotech’s more collaborative culture, the agility of smaller companies, or a science-driven rather than finance-driven focus.  I think all of these ideas have merit, but I would like to examine another cause that I think many people overlook: small biotech companies don’t have a lot of other options.

Now before you say “Yeah, I’ve heard this before, it’s do or die at the small biotechs”, hear me out because that’s not quite where I’m going with this.

Let me start with an anecdote:  I was working on a diabetes drug at a big pharma company and during one of our meetings the project lead put up a slide that showed how far behind we were compared to the competition.  Our lead compound was in pre-clinical testing, while two of our competitors were already in phase II trials.  Yikes, we were at least 3-4 years behind.  Why was that?  It wasn’t because our program was having trouble, but rather because the program had been put on “pause” almost 5 years before while our resources went to other “higher priority” projects.  The project lead was understandably quite frustrated as he had been involved in that initial research and was now being asked “can you please do this faster?”

This type of thing goes on all the time at the big pharma companies; with research budgets in the billions of dollars, R&D portfolios are constantly re-evaluated and resources are reallocated.  From the top, this makes sense: why would you put $500M into project A when putting $500M into project B gives you a bigger NPV (at least according to your model)?  The problem is that portfolio strategy is not an exact science, and when one of the assumptions in your forecast changes next month, your resource allocation can end up looking completely wrong.

Now contrast that with a small biotech company.  A handful of scientists and business folks find some promising technology and decide to develop it.  They spend months trying to line up financing, and when they do, they have a nice pile of cash that has one purpose: develop the idea they started with.  This isn’t to say that development plans don’t change, because they do, but the thought never crosses their minds: “Hey, maybe we should stop working on this compound and try something different; we can always come back to it.”  Even when things appear gloomy and failure seems almost certain, there is no real path other than forward.  They aren’t competing against another project for resources because there is no other project.  There is no opportunity to put things on “pause”.  Projects keep going until they either fail or the money runs out.

What is the consequence of this lack of options?  Projects that one day seem like a dead end keep getting funded (for a while at least), and some of those turn out to be great ideas after all.  The same project in a big pharma company?  It gets shelved and may never see the light of day again.

How can big pharma fix this problem?  Well, the answer isn’t that straightforward.  As I mentioned, portfolio strategy is not an easy thing to get right and handing over a multi-billion dollar per year R&D budget to scientists to play with isn’t in the realm of possibility either.  What we are seeing is a strategy by big pharma companies to cut R&D budgets and use that cash to support academic research and emerging biotechs.  Is this the right strategy?  Only time will tell.


Pfizer’s Lipitor strategy worked… pretty well!

It’s been almost 3 months since Pfizer’s Lipitor lost exclusivity, so it’s not a bad time to assess how the company’s strategy of maintaining market share has worked so far.  Keep in mind that Pfizer really broke new ground with this strategy.  Most R&D-based pharmaceutical companies practically abandon all sales and marketing efforts once a drug loses patent protection, since within a month or two almost all of their market share is wiped out by the lower-priced generics.  However, Lipitor is not your typical drug, as shown by its $10B+ per year revenue numbers.  If Pfizer could keep even 10% of that market share, it would have a revenue stream that a lot of pharmaceutical companies would kill for.

Before we look at the numbers, how about a quick primer on how the generics market works?  When an R&D-based pharmaceutical company first gets a new (small-molecule) drug approved, it’s via a New Drug Application (NDA).  Based on either the patents around the new drug or the market exclusivity awarded through the NDA, the company has the sole right to sell the drug.  The logic behind this right is that it allows a company to recoup the costs associated with R&D over a defined period of time.  The period of exclusivity ends when a generics company gets an ANDA (Abbreviated New Drug Application) approved after the patent “runs out” (or sometimes before it runs out, by proving the patent is invalid).

Now here is the important part.  The ANDA has its own period of exclusivity.  The first generics company to get an ANDA approved gets 180 days of exclusivity as the sole provider of a generic alternative to the branded drug.  Again, the logic behind this is to provide a financial incentive to offset the costs of getting an ANDA approved, and that incentive is substantial.  During the 180-day period, the price of the drug drops only 10-20% (so the generic manufacturer gets almost the same profit margin as the branded manufacturer did, but only for 180 days).  Once that 180-day period of exclusivity runs out, any generics company can get an ANDA approved and sell the drug; competition drastically increases and the price drops to maybe 10-30% of the branded drug’s price.  At this point, profit margins are razor-thin and the drug is basically a commodity.  Due to the rather steep price cuts that come along with generics, the brand-name drug typically loses all of its market share within a month or two of the first generic hitting the market, as most brand-name manufacturers have little interest in competing on price.

In the case of Lipitor, Ranbaxy was awarded the first ANDA approval and, with a little help from Teva, was able to overcome some manufacturing (and regulatory) difficulties and got its generic version of Lipitor to the market just after Pfizer’s last patent ran out.  The other generic is a so-called “authorized generic”, which is in fact a generic version of Lipitor produced with Pfizer’s approval (which the rules allow).  That version is produced by Watson, and Pfizer gets a pretty nice slice of that pie as a result of the arrangement (70% of revenues, according to this article).

Now with all the background out of the way, how has Pfizer’s strategy fared so far?  Pretty well.  As of mid-February, Pfizer still has approximately 41% market share of all atorvastatin prescriptions.  If we run some rough numbers based on Lipitor sales for 2010 ($10.7B), a 41% market share would bring in over $2B in revenue during the 180-day ANDA exclusivity period (the only time Pfizer has a chance of keeping market share).  Lipitor had been slowly losing market share even before the patent expired, so let’s assume a more conservative $1.5B.  All of the effort that Pfizer put into keeping market share (PBM contracts, co-pay cards) doesn’t come cheap, so let’s knock the figure down to $1B.  However, Pfizer’s cut of sales from Watson’s authorized generic (which, by some simple math, has about 20% of the market) probably pushes that back up to $1.25B.
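For transparency, here is the rough arithmetic behind that estimate as a short sketch; the haircuts for pre-expiry share loss, retention costs and the Watson cut are the same judgment calls made above, not reported figures.

```python
# Rough arithmetic behind the Lipitor estimate, using the figures and
# haircuts from the post. All of these are judgment calls, not hard data.

annual_sales_2010 = 10.7e9     # Lipitor sales before patent expiry
market_share = 0.41            # Pfizer's share of atorvastatin scripts (mid-February)
window_fraction = 180 / 365    # the 180-day ANDA exclusivity window

gross_estimate = annual_sales_2010 * market_share * window_fraction    # ~$2.2B
after_share_erosion = 1.5e9    # haircut for share lost before the patent expired
after_retention_costs = 1.0e9  # haircut for PBM contracts and co-pay cards
watson_royalties = 0.25e9      # assumed value of Pfizer's cut of Watson's generic

net_estimate = after_retention_costs + watson_royalties

print(f"Gross 180-day estimate:        ${gross_estimate / 1e9:.2f}B")
print(f"After erosion haircut:         ${after_share_erosion / 1e9:.2f}B")
print(f"After retention-cost haircut:  ${after_retention_costs / 1e9:.2f}B")
print(f"Plus authorized-generic cut:   ${net_estimate / 1e9:.2f}B")
```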

Not bad at all!  Rather than leaving Lipitor to the generics companies, Pfizer spent a little time and money, successfully held onto a sizeable chunk of the market and gets to put another $1B or so in the bank.  A wise investment by any measure.

What will be interesting to see is whether any of the other R&D-based pharmaceutical companies follow suit.  There are some big drugs going off patent in 2012 (Seroquel, Plavix, Singulair) and Pfizer may have just proven that a little effort can provide some big payoffs.  Keep an eye out for more stories like this in the coming year!

UPDATE (3/15/2012): Adam Fein over at Drug Channels just put up a great post about Pfizer’s Lipitor strategy and it has some more recent data.  I suggest you check it out!


Has pharma been doing R&D wrong all these years?

You can’t keep up on the latest biopharmaceutical industry news without hearing about the crisis in R&D productivity.  Basically, the amount of money being spent on R&D by companies has been growing at a rapid pace, but productivity, as measured by the number of new products that reach the market, has not been keeping up.  In fact, it’s been pretty flat over the last 20 or so years, as the graph below illustrates (link to source).  As a result, the cost of getting a new drug approved by the FDA has been pegged at north of $1B when you include all the money spent on projects that go nowhere.

What’s the reason behind this trend?  Well, a number of theories have been put forth:

1. All the “low-hanging fruit” has been picked.  I honestly find this reason to be pretty weak.  Sure, we probably don’t need another opioid receptor agonist/antagonist and I doubt there is much use in finding another M1-muscarinic receptor agonist, but considering how little we know about the biological systems that make up the human body, I find the idea that we’re running out of easy targets laughable.  There are plenty of easy targets out there that would produce a plethora of new drugs; the problem is we don’t know what they are.

2. The strategic shift away from innovation and toward financial returns has killed productivity.  There is likely something to this point.  However, this trend didn’t just start in the mid-1990s.  I was recently speaking with a gentleman who is getting close to retirement after spending most of his life in pharma.  He mentioned how in the mid-1970s he was working for Searle and a new CEO came in who gutted R&D in order to improve the bottom line.  I have no doubt that a strategy overly focused on profit maximization can reduce R&D productivity, but I don’t think that’s the root cause of today’s problems.

3. The recent spate of mergers and acquisitions has killed morale among R&D personnel.  Again, I think there is something to this, but mergers and acquisitions aren’t a recent phenomenon either.  I personally witnessed the level of morale at a big pharmaceutical company during the multiple mergers that happened in the early 2000s and yes, productivity took a nosedive.  However, I don’t think this is the root cause of the drop in R&D productivity.

At this point you might be asking, “Well, what is it then?”  I don’t claim to have the final answer, but a recent paper in Nature Reviews Drug Discovery gave me pause because it backed up a hypothesis that I’ve been thinking about for the past few years.

The most logical way to approach this problem is to ask: when were things better, and what has changed since then?  If we look back to the golden age of pharma, say the 1950s – 1970s, we see incredible productivity.  Entire drug classes were discovered during this time, including the benzodiazepines, antipsychotics, synthetic opioids, antifungals and more.  Janssen Pharmaceuticals alone discovered over 70 new chemical entities over a 40-year period starting in the 1950s, with many of those coming in the earlier years.

So what has changed since then?  Well, the Nature paper discusses the shift in R&D strategy from phenotypic screening to target-based screening.  In layman’s terms, it was the change from screening drugs based on the response they produce in living tissue or an organism to screening drugs based on the effect they produce on a drug target (typically a receptor or enzyme).  Phenotypic screening is how the benzodiazepines were discovered.  The first benzodiazepine, chlordiazepoxide, was not made because anyone thought it would be a good anxiety drug; it was an unexpected product of a chemical reaction.  The chemical was then administered to a laboratory animal (likely a mouse or rat) and its sedative effect was noted.

Contrast this with the drug discovery strategy typically seen today in pharmaceutical companies: researchers isolate a drug target that is believed to play a role in a disease, and then the chemists and biologists go about making new chemicals that interact with that target in a particular way, optimizing for solubility, logP and all the other metrics that make for a good drug.  At this point they have no idea if the drug actually works; they only know it interacts with the target.  They then move to animal models of the disease and try to confirm efficacy, which, if successful, leads to the drug being tested in humans.

The key difference between these two drug discovery strategies is that the first (phenotypic screening) ignores how the drug works and just focuses on whether it works.  Target-based screening focuses on knowing how the drug works, not whether it works (yes, I’m painting with a broad brush here, but bear with me).  Now, if you’re in the business of discovering new drugs that interact with biological systems you have little understanding of, which makes more sense as a strategy?  Which is more likely to lead you to the discovery of a new class of drugs?  It’s really a choice between trying to expand on current knowledge (target-based screening) and throwing a Hail Mary to find something you never knew existed (phenotypic screening).

If you’re at all interested in this topic, I strongly encourage you to read the Nature paper (it’s open-access) and look at some of the data the authors uncovered.  They do a much better job of explaining the trade-offs between the two methods and present some pretty interesting evidence that the shift away from phenotypic screening has had direct consequences for R&D productivity.

Maybe it’s time for pharma to look to the past for guidance on how to succeed in the future?


More Lipitor news: Ahhh… that’s why independent pharmacists are mad!

Yet another update on the Lipitor-is-going-generic saga.  Rather than reiterating what I said in prior blog posts, I’ll just add some interesting tidbits of information.

Over at the drug distribution blog DrugChannels (highly recommended), Adam Fein posted some very interesting commentary on the Lipitor story.  When news broke of Pfizer’s agreements with major PBMs to give Lipitor preferential treatment even after the generics became available (in some cases the PBM wouldn’t reimburse for the generic at all), a group called Pharmacists United for Truth and Transparency (PUTT) had this to say:

The statement called the move “a blatant attempt” by benefit managers to keep Pfizer’s discount while employers still have to pay the full price of the brand-name drug.

Hmmm… it’s awfully heartwarming that a group of pharmacists decided to look out for employers who offer drug coverage.  However, if you dig a little deeper, you’ll see there is a “healthy dose of economic self-interest” in play here, as outlined on Adam Fein’s blog.  Of particular interest is this chart Adam put together…

Now things become a little clearer!  PUTT says they are outraged that employers will be stuck paying higher drug prices if Lipitor is used instead of the generic (which, by the way, they have no proof of), but I would hazard a guess that some of their anger comes from the fact that they are missing out on the juicy margins they usually make during the 180-day exclusivity period.

If you want to see how contentious this issue of pharmacy margins can be, check out the comment section of another blog post by Adam here.  Who knew drug distribution strategy could elicit such emotion?

