Making the Experiment Work

Once, back in grad school, I was talking to my thesis adviser about a particularly difficult experiment I was trying to figure out how to do. I was coming off an experiment run that took about two weeks from the start until I got the results back for analysis:

"It didn't work." I told her.
"What do you mean, 'It didn't work'?" She asked.
"Something about the way the experiment was run was wrong. I got no signal, not even for the standard curve and the controls. It didn't work."
"The only failed experiment is one you don't learn anything from." She chided me. "If you can't use this experience to figure out why it didn't work - or even to learn how you might figure that out with the next experiment - then you're just wasting time and money, hoping to learn something by random chance."

This is a powerful lesson, and one I want to expand on more generally. It's a lesson that should influence how we view the past, as well as how we see the current situation with COVID-19. Because soon (not soon enough, of course) we'll all be talking about the current crisis in the past tense. That's when all the analyses after the fact will come out, along with the second-guessing and whatnot. Ideally, the result of that process will be that we come to the next pandemic much better prepared, with fewer fatalities and fewer negative effects on society as a whole. That's where the idea of this post comes in.

First, let's take a detour through a field that is outside my area of expertise: economics. Now, economics suffers from many of the same problems that epidemiology faces, namely that it's often unethical to conduct controlled experiments to test most of the hypotheses it generates. Even so, it's not useless just because we can't employ the full scientific method in the way we're used to. We can learn things; we just need to be careful about how we go about it, and our confidence in the results' ability to predict similar phenomena in the future won't be nearly as high as it would be if we could do controlled experiments. Instead, we'll have to make do with things like trend line data, hypothetical future projections based on past experience, and A/B testing. None of this is great, but it should help us update our priors, and at the very least let us reject ideas that didn't work.

Economics has seen a large number of experiments like this, where something major is attempted in the face of a crisis and economists project that a certain measure - if adopted - will produce a certain outcome. Sometimes the aftermath produces a consensus and sometimes it does not. For example, after the gas shortages of the '70s, it became widely agreed that price controls are likely to cause shortages.

Some interventions don't resolve the debate. One of my favorite podcasts is EconTalk, a popular economics podcast. I think I've been listening since 2007, and I caught up on all the back episodes early on. There's some great stuff in there, and some that's disappointing, but they've put out over a decade of content, so that's to be expected. Few podcasts can boast a guest list that includes multiple bestselling authors, Nobel Prize winners, and founders of companies like Airbnb and LinkedIn.

During and after the 2008 crisis, the podcast devoted a number of episodes to the topic du jour, trying to figure out what was happening, and afterward what had happened. The host has some strong biases in the free market direction, but to his credit he had on guests with strong biases in the other direction, who believed that intervention was necessary and helpful in overcoming the crisis. In a few episodes, the host and his guest would debate the evidence for and against the economic stimulus, TARP, QE 1-3, etc. The guest would point to a number of studies that clearly demonstrated the efforts had averted disaster. The host would point out methodological flaws in those studies that made their results unreliable. He would then point to his own preferred studies and be refuted in the same way. On a few occasions they agreed that the true result was unknowable based on the available evidence.

As an outsider observing this debate, I can't help but remember the admonition from my thesis adviser: the only experiment that's truly a failure is the one you don't learn anything from. What I saw were two sides who continued to believe the same things after the intervention that they believed before it - and for the same reasons. The experiment didn't provide any usable evidence.

One of the episodes from the EconTalk podcast was about how the Federal Reserve reacted to the stimulus package. Apparently the Fed saw a lot of money flowing into the market - as a result of the stimulus - and got scared that it would cause inflation. In response, it undertook anti-inflationary measures to counteract the stimulus and pull money out of circulation, paying banks interest on any excess reserves they held. Given that the hypothesis being tested required an artificial injection of additional money into the economy to get things moving again, this program of interest on reserves directly undermined the whole experiment.

Now, I'm not an economist, and I have no expertise in the 2008 financial crisis and subsequent remediation efforts. There are a lot of players and many complications involved. My point isn't to do a postmortem on the whole affair, but to point out that while large amounts of resources were being put into testing a hypothesis, additional resources were being put into undermining that test. Not only that, but the stimulus itself was put together with extreme haste, for understandable reasons, and therefore not enough thought went into collecting meaningful data to show after the fact whether the experiment was a success. These complications combined to ensure that no matter what happened - no matter whether this type of intervention was good or bad - we wouldn't know the answer after the fact. In other words, the experiment failed. If you think it's a good idea to use stimulus to avert economic disaster, the data from the 2008 crisis response didn't help build a consensus to support your view. If you think economic stimulus is a really bad idea, the data from the 2008 crisis response didn't help demonstrate that stimulus doesn't work.

In short, we spent a lot of money in the heat of the moment on this idea. Whatever you think about whether that money was well spent or wasted, one thing we didn't do was hand down knowledge to future generations who will surely face a similar crisis. In that sense, the experiment failed. In the aftermath, all we know is that we'll have to run the experiment again - and write off the previous attempt as a wasted opportunity.

This brings me to the current crisis - and no, I'm not talking about the recently passed stimulus/relief bill. I'm talking about the social distancing/shelter-in-place/quarantine policies being implemented around the world. I have maintained from the beginning that a situation as rapidly evolving as this one carries with it a high degree of uncertainty. People want information that's not only unknown, but also unknowable at this stage of the pandemic. Some of that uncertainty will likely be resolved after the crisis of the current moment, when we have time to gather evidence and discuss it more calmly. Some of it will never be resolved. However, I know a lot of people who have developed a high degree of confidence anyway. This confidence runs in two different directions:

One group believes that we're in the middle of exponential growth in the spread of a disease that could kill millions of people worldwide. That this isn't just something that will catch a few 80-plus-year-olds in nursing homes who were already at death's door, but that a large mass of otherwise-healthy people could find themselves going to hospitals too overcrowded to treat them. That who lives and who dies may come down to a roll of the dice if we don't control or stop the spread of this disease. Without strict adherence to certain measures, the outcome is going to be far worse than a little economic pain in the short term. People who don't take social distancing seriously are literally spreading death around them and should be treated as such.

Another group believes that we're in the middle of mass hysteria, sacrificing the livelihoods of millions of people for a crisis that's more theoretical than real. They hypothesize that the disease wouldn't spread to a sizable percentage of the population even with normal social interactions, that the rate of morbidity/mortality is overstated, that health care resources won't be overwhelmed in places that aren't Lombardy, Italy, that even if they are the difference between intervention and non-intervention isn't that large, that the costs have been disproportionate to the lives that could be saved, and probably a few additional hypotheses that will pop up in the next week or two. This group believes that the social distancing program is a criminal destruction of people's lives, and that it's all for nothing.

I want to speak to this second group for a minute. If you believe that social distancing and shelter-in-place are a horrible way to deal with the current situation, I would like to make the case that your best response to this crisis is to practice assiduous social distancing and hand-washing. Not only that, but your best bet is to encourage everyone around you to do the same, and with the same fervor. Whether you like it or not, this is the experiment we're doing today. Nothing you do will save people today from experiencing the full effects of social distancing and shelter-in-place. If you think this is the wrong thing to do, your most meaningful response is to make sure the experiment is as resounding a success as possible. And by 'success' I don't mean 'it did what we thought it would do'; I mean 'we can look at the results and learn something from them'. Make sure enough people adhere to the experiment that there's no way to look at the post-hoc analysis and say, "just think of what would have happened if we'd gotten buy-in from everybody!" If you're right, and this experiment is all for naught, the results will bear you out in the end. You'll spare countless billions in the future from a similar fate. And if you're wrong, you'll save lives today.
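To make that concrete, here's a toy sketch of why partial adherence muddies the result. It's a bare-bones, discrete-time SIR-style model in Python; every number in it (the R0, the effect of distancing, the infectious period) is invented purely for illustration and is not calibrated to COVID-19 in any way.

```python
# Toy SIR-style simulation: how adherence to distancing changes what the
# "experiment" can show afterward. All parameters are invented for
# illustration; nothing here is calibrated to COVID-19.

def simulate(adherence, days=180, population=1_000_000, seed_infected=100,
             r0=2.5, distancing_factor=0.4, infectious_days=10):
    """Discrete-time SIR model. `adherence` is the fraction of people who
    comply with distancing (0 = nobody, 1 = everybody)."""
    s = population - seed_infected   # susceptible
    i = float(seed_infected)         # currently infected
    r = 0.0                          # recovered
    beta = r0 / infectious_days      # baseline daily transmission rate
    # Full compliance scales transmission down by distancing_factor;
    # partial compliance interpolates linearly between the extremes.
    effective_beta = beta * (1 - adherence * (1 - distancing_factor))
    for _ in range(days):
        new_infections = effective_beta * i * s / population
        recoveries = i / infectious_days
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
    return r / population            # fraction of population ever infected

for adherence in (0.0, 0.5, 1.0):
    print(f"adherence {adherence:.0%}: {simulate(adherence):.1%} ever infected")
```

In a toy run like this, 50% adherence can still produce mass infection - an outcome one side reads as "distancing failed" and the other as "it was never really tried" - while near-universal adherence separates the hypotheses cleanly. That's exactly the post-hoc ambiguity worth avoiding.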

Years from now, we will look back on the current moment and have a lot to say about what happened. The worst possible outcome would be if we look back to what happened today and have to say, "the experiment failed, we'll have to run it again to learn anything from it."

Comments

  1. I like this approach. That's a good way to think about it.

    I've had similar thoughts in the realm of politics. (I confess I was not wise enough to realize that it also applied in the way you describe.) So often one party will want to do X, and in the process of getting it past the other party and all the lobbyists, what finally passes is a very warped version of X. When it doesn't work, they always say it was because of all the changes they had to make - had they been able to implement it without the interference of the other party, it would have worked. And part of me thinks: okay, let's do it exactly the way you want, so at least when it's over the data we have is on the system that was actually proposed, not some bastardization of it.

    1. My thoughts exactly. I have my own political views and they don't tend to align with any of the major movements out there (including Democrat, Republican, Libertarian, Socialist, Classical Liberal, etc.), which means few of my preferred policies have a prayer of ever getting considered. That said, even if I disagree with a policy I'm not opposed to it getting passed so much as I'm opposed to it passing in a shoddy way. You want to privatize prisons, set a $20 minimum wage, expand the death penalty, or whatever? Fine, but in exchange I want solid metrics, a comparator arm, preregistered benchmarks against which the program will be deemed to be a success or failure, and a sunset date when we review the evidence and determine whether to continue the experiment or close it down for futility - based on those preregistered benchmarks!

      I think I'd vote for a policy I disagree with that had proper controls in place for determining efficacy before I'd vote for a poorly controlled policy that I agree with. It would be better to finally put the question to rest than to keep up this game of guess-and-never-check.
