Accurate forecasts of extreme weather events such as hurricanes are critical in helping communities to prepare and respond as effectively as possible. So when scientists predict extreme weather that never materializes, lay people tend to wonder what went wrong.
This is a natural tendency that is not tied to a failure of the science, but rather to differences in the way scientists and lay people view predictions about extreme events, such as Hurricane Sandy a year ago. While forecasters are, by definition, married to their science, they also need to be mindful of the human factors that help determine how useful their work is in protecting people.
Early this summer, meteorologists made dire predictions for the 2013 Atlantic hurricane season: 18 named storms, including nine hurricanes. By contrast, the average season, which runs June 1 to Nov. 30, usually has 12 storms, including six hurricanes. Even just prior to the midpoint of the season, the same meteorologists were forecasting “an above-average probability of United States and Caribbean major hurricane landfall.”
When midseason rolled by without a single hurricane, a handful of media stories noted the nonstory. All voiced the question on many people's minds when a dire forecast like this bites the dust: What's going on with hurricanes these days? And what's going on with scientists? Do they know anything for sure? Some reports tried to explain why there had been no hurricanes (natural variability), while others explained why scientists are becoming less certain (it's more complicated than expected).
What is notable in these discussions is the mismatch between scientific and lay mental models of the potential usefulness of climate information.
Scientists place value on probabilistic forecasts. Indeed, probabilities are useful for decision makers trained in allocating resources according to estimates of likelihood (e.g., disaster response agencies can implement staffing plans to ensure they have adequate personnel to meet anticipated demands).
Thus, refining scientific estimates of the probabilities of the frequency and intensity of extreme climate events is certainly important.
Lay people, however, tend to be less interested in probabilities and more interested in outcomes.
One reason relates to the relative difficulty of imagining a probability and the relative ease of "feeling" an outcome. People tend to overestimate the probability of rare events and underestimate the probability of common ones. By contrast, we can easily conjure vivid images of the impacts of a dramatic event, such as a hurricane making landfall, while a nonevent, like a partly cloudy day, leaves little impression.
Lay people often find simple rules of thumb more useful than probabilities: these shortcuts help us process complex information efficiently, and they often lead to a good plan of action. Sometimes, of course, they go awry, and harm that could have been prevented is not.
This helps explain why a majority of Americans report having recently experienced an extreme event or natural disaster, yet only about one-third keep a disaster response plan or emergency kit in their homes.
Another human tendency is to invoke superstitious thinking to protect us from harm. The traditional “knocking on wood” is based on the belief that we can avoid tempting fate after making a favorable observation (such as a quiet hurricane season).
Such rituals make us feel better and can boost our confidence in, for instance, being able to prepare for disasters, which in turn motivates action and can result in behaviors that keep us safe.
When generating climate information, therefore, we need to examine empirically how useful it will actually be to the people it is meant to serve.
No doubt, we need to improve forecasters' skills and nonscientists' understanding of the complexities of the relationship between predicted climate events and outcomes.
That said, we also need to improve the usefulness of information to take advantage of what scientists know to help save lives.
Melissa L. Finucane and Regina Shih are senior behavioral and social scientists at the nonprofit, nonpartisan RAND Corporation.
This commentary originally appeared in The Star-Ledger on October 8, 2013. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.