The phrase “flatten the curve” familiarized millions of Americans with the type of epidemiological models used to estimate virus transmission, cases, and the potential death toll from COVID-19.
But those models may be less useful as the country enters a different stage of the crisis—one in which changed behaviors must be taken into account, and new policy questions about reopening need answers.
RAND media relations director Jeffrey Hiday spoke with three RAND researchers who build or work with these complex models:
- Jeanne S. Ringel is director of the Access and Delivery Program of RAND Health Care and a senior economist at RAND.
- Raffaele Vardavas is a mathematician at RAND whose expertise is primarily in constructing and analyzing epidemic models of the spread of infectious diseases.
- Carter C. Price is a senior mathematician at RAND whose work has included the COMPARE microsimulation model to study the impact of health care reform.
An edited transcript of their May 21 conversation follows.
Why is there no one model that can give us everything we need to know about COVID-19?
Carter Price: Part of that has to do with the fact that we don't know a lot about the disease—or at least we didn't three months ago. Another aspect of it gets to what questions are you trying to answer. For different policy questions, you need a different model.
Some of the models can answer multiple policy questions. But other policy questions need a unique model. So there is a plethora of models, and it takes a lot of work to figure out which one is appropriate.
Is the appropriateness based on the objectives, or the inputs, or the math underlying it?
Price: A little of all that. If you're trying to answer a specific type of question, your model should be designed to answer that type of question. And that's not, unfortunately, always the case with some of the models that we've seen.
Can you describe in broad strokes how these models work?
Price: There are two broad classes. Statistical models rely on fitting curves to past data and then using assumptions from those fitted curves about what might happen. So, given that this was the virus's trajectory in these other places, we would expect the trajectory here could be similar.
The other class comprises theory-based, or systems dynamics, models that try to capture how the system evolves over time based on theory. For those, you boil the system down to a couple of parameters and make assumptions of the form: if this happens, then that should happen. And then based on that, you'll make predictions.
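The curve-fitting approach described above can be sketched in a few lines. This is a toy illustration, not any group's actual method, and the case counts are made-up numbers, not real data:

```python
# A toy version of the statistical class of model: fit a curve to past
# counts, then extrapolate. Here exponential growth is fit by ordinary
# least squares on log counts. The case numbers are invented for
# illustration.
import math

observed = [12, 18, 26, 40, 61, 90, 135]   # hypothetical daily cases, days 0-6
xs = list(range(len(observed)))
ys = [math.log(c) for c in observed]

# Least-squares fit of log(cases) = a + b * day
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum(
    (x - xbar) ** 2 for x in xs
)
a = ybar - b * xbar

def project(day):
    """Extrapolate the fitted exponential to a future day."""
    return math.exp(a + b * day)
```

The fragility is visible in the structure: the projection assumes the fitted trajectory continues unchanged, which is exactly what policy interventions are meant to break.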
How long do we have to wait to tell how accurate the model is?
Raffaele Vardavas: After a few weeks we can look to see how those projections performed. I'd say two weeks because, of course, there's about a two-week lag in seeing interventions play out in terms of hospitalizations and deaths.
Having said that, early models can suffer from big uncertainties. For instance, the estimated case fatality rate, which can be used to project the likely number of deaths, can be quite wrong early in an epidemic. That was the case for SARS, where the fatality rate was initially estimated at 4 percent and later revised to about 10 percent, just as an example.
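One well-documented mechanism behind early fatality-rate errors like the SARS example is that deaths lag diagnoses, so dividing today's deaths by today's cases understates the rate while the epidemic is still growing. A toy calculation, with all numbers hypothetical, shows the effect:

```python
# Illustrative only: an assumed fatality rate and delay, applied to a
# hypothetical epidemic doubling every 5 days.
true_cfr = 0.10     # assumed fatality rate among detected cases
lag_days = 14       # assumed average delay from diagnosis to death

# Hypothetical cumulative case counts over 60 days
cases = [100 * 2 ** (d / 5) for d in range(60)]
# Deaths realized from the cohort diagnosed `lag_days` earlier
deaths = [true_cfr * cases[max(d - lag_days, 0)] for d in range(60)]

naive_cfr = deaths[30] / cases[30]               # deaths-to-date / cases-to-date
lagged_cfr = deaths[30] / cases[30 - lag_days]   # deaths matched to their cohort
```

Here the naive ratio comes out several times too low, while matching deaths to the cohort they came from recovers the assumed rate.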
A new model from Columbia University suggests 36,000 fewer Americans might have died from the pandemic if social distancing measures had been imposed a week earlier. Is that type of backward-looking projection useful?
Vardavas: Yes, that certainly needs to be done. Maybe it's a little soon. But we should look at different stages back in time to see how things could have played out differently.
Price: I agree. It's a useful exercise to get that kind of information early on to help people think through the policy implications or the health implications of policy choices.
Another model in the news has been the University of Washington model. It had predicted 70,000 American deaths by Aug. 1, but that number was passed weeks ago. What went wrong there?
Price: There were technical flaws that led it to have some biased results. There were also some errors in the policy analysis.
They were making recommendations, for example, about when to relax social distancing policies: that if you have one new case per 1 million people per day, that would be an appropriate time to relax physical distancing. However, the model was not sufficiently accurate to estimate when that might occur.
Researchers need to be careful about overstating what their models can do. And policymakers need to be wary when looking at models. And the media, when covering these things, need to ask around to make sure that other experts agree the model is appropriate.
You mentioned one new case per million people per day, which didn't work in this case. Has some alternative rule of thumb been widely adopted?
Price: There are lots of rules of thumb. The administration has suggested one involving two consecutive weeks of falling case rates. That's somewhat grounded in analysis, though it's not what I would go with. Even with a small decline over two weeks, if you still have a lot of new cases out there, you're going to get a lot of spread when you open things up.
Why do we have the types of models we have had so far during this pandemic? Has it all been driven by speed and urgency? Or data availability?
Price: Yes, some models are easier to roll out than others and some rely on either more or different data.
The best models being used now, in my opinion, are the system dynamics models that are using a variant of the susceptible-infected-recovered structure. That class of model is very useful for policy analysis because it has a couple of parameters that translate into interventions.
So a policy could reduce the interactions people have, which reduces the spread. That's a model structure that's very well suited for policy analysis at this stage of the epidemic. There are other policies that this type of model isn't well suited for, however, and then we'll need other classes of models to assess those policy interventions.
Have we managed to avoid the worst-case scenario in the U.S.—and does that mean we'll need a different kind of model going forward?
Price: It seems so. The U.S. has not reached the level of spread and the level of deaths that the models indicated might have happened. We didn't have an optimal outcome—but it's far from as bad as it could have been. Now that we are starting to relax physical distancing, we'll need a new set of models to explore the implications of that, and to explore what policies might work best in this environment.
We have a new model at RAND. Is it well positioned for this next phase of the pandemic?
Jeanne Ringel: RAND developed a decision support tool for state policymakers. It combines information from two models—an epidemiological model and an economic model—and adds a qualitative policy analysis to assess the effects of different combinations of social distancing policies, such as closing schools, closing businesses, and stay-at-home orders. As we discussed, there are a lot of models out there, but ours is unique in a couple of ways.
First, we allow the user to choose between different levels of social distancing. Most of the models we'd seen treated social distancing as one lump-sum set of policies, and estimated the effect of loosening restrictions by just saying it would increase person-to-person contact by X percent. We use a more sophisticated method that looks at differences in where people mix and how that changes under different social distancing restrictions. So it gives a more nuanced analysis.
We also let the user adjust the timing so that they can really look at the effect of keeping restrictions in place longer or removing them sooner, and so forth.
The final thing is that it combines this epidemiological model focused on the health outcomes with an economic model that looks at economic outcomes. Putting those things together in one tool—where you set the assumptions about what the social distancing policies are and how long they'll be in effect—allows you to consider a wider range of policy impacts and potentially think through the trade-offs between them.
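The "where people mix" idea described above can be sketched as weighting contacts by setting and scaling each setting separately, rather than applying one lump-sum cut. The settings, contact counts, and policy multipliers below are all hypothetical, not the tool's actual values:

```python
# Hypothetical average daily contacts by setting
baseline_contacts = {"home": 4.0, "work": 6.0, "school": 5.0, "other": 5.0}

def effective_contacts(policy):
    """Average daily contacts under per-setting multipliers in [0, 1]."""
    return sum(c * policy.get(s, 1.0) for s, c in baseline_contacts.items())

# A crude lump-sum 50% cut versus a targeted policy mix
lump_sum = effective_contacts({s: 0.5 for s in baseline_contacts})
targeted = effective_contacts(
    {"school": 0.0, "work": 0.5, "home": 1.0, "other": 0.7}
)
```

Two policies can produce similar average contact reductions while closing very different things, which is why the per-setting breakdown matters for comparing, say, school closures against business restrictions.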
It's also state-by-state. What drove the decision to do it that way as opposed to a national overview?
Ringel: We thought that most of the decisionmaking was going to be happening at the state and local level. So we wanted to build a useful model that takes into account a state's population density or age distribution or what exactly is going on with the pandemic in that state at that time.
We also wanted to look at the local level, but unfortunately the data are not consistently available at the county or city level in order to feed into the epidemiological model.
Can you draw a national projection from RAND's tool?
Ringel: Yes, I think you can aggregate up from the state level. But what you see across all of the states is a similar overarching pattern: As you relax restrictions and allow more people to interact, we expect the number of cases and deaths to increase.
Keeping restrictions in place, even a few weeks longer, can potentially reduce the size of that increase—but it sort of just pushes the problem further into the future. Whenever we relax restrictions, we might expect to see these increases. So what's really important is that we make good use of the time while restrictions are in place to develop ways to interact more safely.
For instance, businesses have figured out ways we can interact more safely—adding a sneeze guard at the grocery store checkout, or sanitizing between customers at the barbershop. All of those things have been happening while the restrictions are in place.
Our model doesn't yet capture the effect of those changes because they're not in the data. What we currently do is model “a new normal,” a level of activity that's basically an average between pre-pandemic interaction and the level of activity we would see under the restrictions.
As we get more data about the way people are behaving and what impact that has, we'll be better able to project increases in cases. It may not be as bad as what the projections look like right now because we just haven't been able to incorporate those behavior changes yet.
The RAND tool seems to show that in most states things are going to get worse late in the summer. Is that a fair conclusion to draw? And will our model eventually go out beyond Sept. 1?
Ringel: On the first question, yes, that's exactly what we see as social distancing is relaxed and when people get back out into the economy and start interacting again. But as I was just saying, there are a variety of reasons why what the tool is currently projecting may be a bit higher than what we'll ultimately see because the behavior changes just aren't showing up in the data yet.
We bring in new data every day and we rerun the model every couple of days to generate new information. We're also going to incorporate improvements as we learn more. But as you'll see if you take a look at the tool, the error bands around the projections get very wide the further out you get toward September. That just reflects growing uncertainty.
What additional elements might be incorporated into the model?
Vardavas: One of them is seasonality. Currently our model is projecting that things are going to get worse if we open up in this way, but we don't really know about the effects of the time of the year. At the moment the model assumes no seasonal effect, but we can change that to assume some seasonal effects perhaps similar to those for influenza.
We also might want to consider a new feature, such as the loss of immunity. What would the dynamics look like depending on the duration of immunity for people who have been infected? And the behavioral changes are a very important aspect because, as Jeanne mentioned, the level of mixing might indeed be quite reduced.
We could also add in statistical models within the transmission model to make projections based on how people might adapt to the new case fatalities or case counts. And of course there's mask-wearing. These are some features that we're thinking about.
One limitation of our model is that we are currently not considering the travel patterns between the states. I think the mixing between states is important at the beginning of the epidemic and at the end, not so much right now. So it's fair to have excluded it at this stage, but this is going to become an important factor later on.
Given all these different models, approaches and fine-tunings, what do you see in terms of the projections—are they converging or diverging at this point?
Price: We're reaching a point where a lot of the models that were useful in the early phase are no longer going to be useful.
In the first phase you really need to focus on the epidemiology because you're trying to avoid a massive spread and a lot of death. We've mostly tackled that and now we can focus on other aspects.
So you need both the epidemiological side and economics, social issues, and other things. There are a lot of economic models, but we just don't have many that simulate other very important aspects of what we do next in this crisis. So there's not really agreement, because there's not really a lot out there.
There are some specific populations—like nursing homes and prisons—that have been hit very hard by COVID-19. Do those vulnerabilities get worked into models?
Price: Where you have an isolated group or a tightly contained group, the dynamics there are going to be different than the dynamics in the population as a whole. And so you'll need a different model for that—particularly because COVID running rampant in a nursing home causes lots of deaths and lots of cases in a very tight period of time. Models that treat everybody as fairly homogenous are not going to capture that.
Vardavas: Right. The population-level models we've constructed are not suitable for modeling specific settings like a nursing home or prison or schools. If we wanted to look at two policies—say cutting down class sizes in schools, or having different students come to school on different days—individual-level models such as microsimulation models and agent-based models could be used for that.
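A back-of-the-envelope version of the class-scheduling comparison shows why individual-level detail matters; a real microsimulation would track individual students, but even counting contact pairs (class size here is a made-up number) makes the point:

```python
# With alternating-day attendance, each student shares the room only with
# their half-cohort, cutting potential in-class contact pairs by more
# than half. The class size is hypothetical.
class_size = 30

def contact_pairs(n):
    """Distinct pairs of students sharing a room at the same time."""
    return n * (n - 1) // 2

everyday = contact_pairs(class_size)              # all 30 attend together
alternating = 2 * contact_pairs(class_size // 2)  # two cohorts of 15
```

Because pairs grow roughly with the square of group size, splitting a class more than halves the contact pairs, a nonlinearity that population-average models miss.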
What are some lessons that policymakers and modelers perhaps could be gleaning from all the data that's being gathered now?
Ringel: Some of the more complicated models, like those that account for individual behavior, take much more time and data to build. Some may be ready in time to inform our current response, but I think that they may be most useful helping us to prepare for the next pandemic.
They could address the what-if questions: What if we'd left schools open longer? What if we'd had better testing capacity? With data from the entire arc of the pandemic, we'll be able to play out those scenarios.
There are also natural experiments taking place all over the country and around the world, as different jurisdictions use different strategies. This will provide a rich set of data that will allow researchers to try to tease out the effects of different policies on disease outcomes.
So I think all of the data we're collecting now and all of the information on policy changes certainly has uses now, but it's going to have use for a long time to come.
What advice can we give to policymakers now about which models they should be looking at most closely or what else they should be doing to get to the best policy decisions?
Price: At this point it's the modelers' job to come up with the next relevant analysis. But policymakers should look at lots of different models, because different modelers are making different choices.
If a model is highly sensitive to one policy input or something like that, you need to be aware of that. I also encourage policymakers to be wary. Make sure that you can interpret the model yourself, or bring on people who can help.
As we've seen, not all models are created equal—and some in particular have made poor recommendations based on bad analysis.