Cancer update: Opportunity costs

This is part of a series of posts giving an update on the current state of cancer research, as I see it after attending this year's annual meeting of the American Society of Clinical Oncology (ASCO).  In these posts, I'm trying to translate impressions from a highly technical field into a format that is useful for a more general audience.  To that end, I'm going to take a minute to talk about survival curves and what they mean in the context of clinical oncology.

I have young children, and one concept we try to help them understand is that they only get to choose among available options, not the universe of all imaginable options.  For example, I'll offer them pancakes or cereal for breakfast and one of them will state (with finality), "I'm having muffins."  When I point out that we don't have muffins, he'll double down: "I want a blueberry muffin."  What he hasn't yet learned, and what we keep trying to teach, is that he doesn't get to choose from the universe of all imaginable options, only from the universe of available ones.

In the following examples, I'm going to use made-up data that is fairly representative, partly because I don't want to steal anyone's data, but also because these graphs represent trends I saw quite a bit last weekend.  A common survival curve graph I saw at ASCO might look something like this:

By way of orientation: this is a survival curve, so the X-axis represents time (in weeks), and the Y-axis represents the percentage of the population that has 'survived' without experiencing some event.  That event might be death itself (so the curve shows the percentage of patients still alive), or, more commonly, disease progression, in which case the curve shows the percentage of patients whose disease hasn't gotten worse, which we call progression-free survival (PFS).
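For readers who like to see the mechanics, here's a minimal sketch (in Python) of how a curve like this is typically estimated from patient-level data using the Kaplan-Meier method.  The numbers are made up to roughly mirror the shapes I'm describing; they aren't from any actual trial.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

def kaplan_meier(times, events):
    """Kaplan-Meier estimate of a survival curve.
    times  : weeks until progression/death, or until follow-up ended (censoring)
    events : 1 if the event was observed, 0 if the patient was censored
    Returns event times and the estimated fraction still event-free past each one."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    n_at_risk, s = len(times), 1.0
    t_out, surv = [0.0], [1.0]
    for t, e in zip(times, events):
        if e:                        # an event happened at time t
            s *= 1 - 1 / n_at_risk   # fraction of the at-risk group getting past t
            t_out.append(t)
            surv.append(s)
        n_at_risk -= 1               # either way, this patient leaves the risk set
    return np.array(t_out), np.array(surv)

# Made-up progression times (weeks) for two hypothetical trial arms.
treatment  = rng.exponential(scale=75, size=200)   # longer typical time to progression
standard   = rng.exponential(scale=50, size=200)
censored_t = rng.random(200) < 0.15                # some patients lost to follow-up
censored_s = rng.random(200) < 0.15

for label, times, cens in [("treatment", treatment, censored_t),
                           ("standard therapy", standard, censored_s)]:
    t, s = kaplan_meier(times, ~cens)
    plt.step(t, 100 * s, where="post", label=label)

plt.xlabel("Weeks")
plt.ylabel("% progression-free")
plt.legend()
plt.show()
```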

At first glance, this chart is both encouraging and discouraging.  It's encouraging because it shows a clear survival advantage for those who got the drug versus those who didn't.  We're pushing back cancer, even if only a little bit!  (In most cases, this will be for a specific type or sub-type of cancer, such as non-small-cell lung cancer, NSCLC.)  In a more general sense, though, it's a bit discouraging.  From this graph, survival after about a year was roughly 55% on treatment versus 35% off.  Another way to look at it is that it took 54 weeks to hit 50% survival on treatment, versus 36 weeks off treatment.  That's great and all, but after two years the benefit disappears entirely.
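That "time to 50%" figure is the median survival, and reading it off a curve is mechanical: it's the first time point where the estimated survival fraction drops to half.  A tiny sketch, reusing the kaplan_meier function from above (again, illustrative numbers only):

```python
import numpy as np

def median_survival(times, surv):
    """First time at which the estimated survival fraction drops to 50% or below.
    Returns None if the curve never gets that low during follow-up."""
    below = np.nonzero(np.asarray(surv) <= 0.5)[0]
    return times[below[0]] if below.size else None

# With the made-up arms from the previous sketch, this lands somewhere around
# 50 weeks on treatment and somewhere around 35 weeks on standard therapy:
# median_survival(*kaplan_meier(treatment, ~censored_t))
# median_survival(*kaplan_meier(standard,  ~censored_s))
```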

If we have to rely on slight, gradual improvements to treatment where patient survival is improved four to six months at a time, we're going to be hacking away at cancer for many decades to come.  Sure, we're making progress, but it's at a glacial pace.  Why are we wasting our time researching therapies like this one, and not pushing for a cure?

In part this comes down to the lesson I alluded to above about not getting to choose among all imaginable options.  We'd like to test a magic bullet that cures all cancers.  Maybe it costs a few extra billion dollars at the outset, but if it means we cure cancer that's great!  In reality, that's exactly what we try to do in dozens of different ways, with every type of research.  Nobody starts off hoping their drug is going to provide a three-month survival benefit to 37% of the patients who take it.  They design a well-crafted therapy, based on solid science and pre-clinical evidence, then go test it out on real-world patients.  The results, too often, are survival curves that look like the one above.  If you're a doctor searching for something to offer a dying patient, or if you're a patient completely out of options, you don't get to choose a treatment like this:

Sure, you can imagine it, and everyone would like to get that.  But that's not really the choice.  The choice is between the red line and the blue line on the first graph.  Unless the toxicity is worse for the blue line, most patients would probably take that over standard therapy.

So is that it?  We just keep plugging away, spending billions of dollars every year pushing back cancer by degrees?

No.

We try to learn from each of these experiences.  Indeed, this year's ASCO theme was "Caring for every patient, learning from every patient."  Several times during the conference I saw researchers do subset analyses of patients in the treatment arm: they would look at some genetic biomarker and ask what the survival curves for those patients looked like.  Here's an example of the kind of graph I saw:


Now we can compare not just how patients do against standard therapy, but also whether this biomarker makes a difference.  The biomarker is usually just some genetic marker that tells us something about the underlying biology of how the patient's cancer might respond to the drug.  In this example, patients who don't have the biomarker respond at a much higher rate.  After a year we're still seeing the ~30% survival improvement between treatment and standard therapy.  However, now there's a persistent benefit even after two years, with nearly 25% more patients' disease controlled long-term.  This gets us much closer to the ideal survival graph above, and it feels like real progress.
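Mechanically, a subset analysis like this is just splitting the treatment arm by biomarker status and summarizing each group separately.  Here's a hedged sketch with invented patient-level numbers; the biomarker, the weeks, and the split are all hypothetical, chosen only to show the bookkeeping.

```python
import numpy as np

# Hypothetical patient-level data for the treatment arm only (not from any real study):
# weeks until progression, or 104 if the patient reached the two-year data cutoff
# without progressing, plus a made-up biomarker status for each patient.
weeks  = np.array([ 14,  22,  30,  38,  47,  60,  70,  88, 104, 104, 104, 104,
                    10,  16,  21,  28,  33,  40,  49,  58,  66,  80,  95, 104])
marker = np.array([  0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,   0,
                     1,   1,   1,   1,   1,   1,   1,   1,   1,   1,   1,   1])  # 1 = biomarker-positive

# Subset analysis: within the treatment arm, how do the two biomarker groups compare?
for status, name in [(0, "biomarker-negative"), (1, "biomarker-positive")]:
    group = weeks[marker == status]
    for horizon in (52, 104):  # one year, two years
        # Crude percentage still progression-free at the horizon (patients at the
        # data cutoff count as progression-free; real analyses handle censoring
        # properly with Kaplan-Meier curves like the earlier sketch).
        still_ok = np.mean(group >= horizon) * 100
        print(f"{name}: {still_ok:.0f}% progression-free at week {horizon}")
```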

We got there by only treating patients who were most likely to respond to therapy.  That's the point.  If you're a cancer patient, looking at the range of possible therapies available, you want to know which one(s) will work, and how well they'll work.  In other words, in the world where we can only choose among available options, it's really useful to know more about which options are worth spending your time on.

If you just got diagnosed with cancer, and you knew ahead of time that you had twelve months to live regardless of whether you came in for weekly infusions of chemotherapy, radiation, experimental therapies, and so on, you'd probably choose to forgo the expensive treatments and just stick with managing your cancer symptoms.

On the other hand, if you knew that the first two treatments doctors normally use wouldn't work on you, but the third one would work phenomenally, you'd be far better off going straight to the therapy that matters, and skipping the misery of the ineffective treatments.  If you've got cancer, time is incredibly valuable.

So are the cancer drugs themselves.  Many cancer drugs cost upwards of $15,000-$20,000 per month.  If you're looking at spending that kind of money on a treatment, it's well worth spending an additional $500 to test for a biomarker.  As the number of relevant biomarkers grows, you'll want to test for more than one at a time.  Even at roughly $2,500-$5,000, next-gen sequencing (NGS), which can look at hundreds of targets at once, is worth the money to figure out which treatment to choose.  In the US, the decision to cover testing like NGS usually sits with insurance companies; in many other countries it sits with state-run payers.
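As a rough back-of-envelope, using the ballpark figures above (midpoints of the ranges, not actual prices for any particular drug or test):

```python
# Back-of-envelope: what does avoiding even one month of an ineffective drug buy?
# All dollar figures are the ballpark ranges from the text, not actual prices.
drug_per_month = 17_500          # midpoint of the $15,000-$20,000/month range
single_marker_test = 500
ngs_panel = 3_750                # midpoint of the $2,500-$5,000 NGS range

# A single-marker test pays for itself if it steers roughly 1 in 35 patients away
# from one month of a drug that wasn't going to work; an NGS panel, roughly 1 in 5.
# (These are just the cost ratios below, nothing more subtle.)
for name, cost in [("single-marker test", single_marker_test), ("NGS panel", ngs_panel)]:
    print(f"{name}: pays for itself if ~1 in {drug_per_month / cost:.0f} patients "
          f"avoids one month of ineffective therapy")
```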

Currently, most payers are choosing not to pay for NGS testing, so patients are paying out of pocket where they can afford it.  As more biomarkers are discovered and validated, the cost-benefit analysis will shift in favor of these tests, giving doctors the tools they need to make targeted decisions about which therapies will work best for their patients.  We'll spend less money on therapies that don't work and move more quickly to the treatments that provide life-saving benefit.

I have one more post in this series in the works, which I hope will tie together this post and the previous one with some additional thoughts on how cancer treatment is preparing to shift dramatically toward better outcomes for everyone.
