A Scoping Literature Review on Indicators and Metrics for Assessing Racial Equity in Disaster Preparation, Response, and Recovery
Jul 15, 2021
In this webinar, senior social scientist Melissa Finucane reviews the complex nature of social equity—including contextual, procedural, and distributional equity dimensions—and how a robust, evidence-based approach is needed to measure progress toward equity in disaster contexts. Challenges in identifying underserved groups and selecting appropriate equity metrics are discussed in the context of current policy initiatives to advance racial equity.
This November 2021 event was presented by the Disaster Research and Analysis Program (DRAP) of the RAND Homeland Security Operational Analysis Center (HSOAC). The DRAP webinar series was created to increase understanding of how disaster policies can affect the ability of communities to respond to and recover from disasters.
Thank you all for joining us today, and welcome to the first of a series of webinars that HSOAC's Disaster Research and Analysis Program will be hosting. The program was established to provide research and analysis to DHS leadership to inform disaster management policies for resilience, mitigation, planning, preparedness, response, and recovery.
I'm Jessie Riposo, and I'm the program director for the Disaster Research and Analysis Program, and I will be moderating today's session. You can post questions in the question box, and I will do my best to ask them over the course of the presentation.
Our speaker today is Dr. Melissa Finucane. She is a senior social scientist at RAND. Her research focuses on environmental and health risks such as climate change impacts, urbanization, freshwater management, oil spills, and infectious disease outbreaks. She is also the codirector of the Climate Research Center at RAND and a senior fellow at the East-West Center in Honolulu, Hawaii. Today, she will be talking about the complex nature of social equity and what is needed to measure progress towards equity in disaster contexts. Melissa, over to you.
Great. Thank you so much, Jessie. It's a real honor for me to have this opportunity, and I'm grateful for that introduction, but also just to have this opportunity to share with you what we know about evidence-based or data-informed approaches to assessing social equities and inequities in the context of disaster assistance programs.
So, I want to start by acknowledging that debate around how to equitably allocate taxpayer dollars is not a new question for federal or other public departments and agencies. But there's certainly increasing interest in and commitment to reducing inequities. And there's now, more than ever, a need for defensible frameworks and indicators and metrics for tracking progress toward equitable outcomes. So this talk is really aimed at helping policymakers and practitioners in particular understand what equity is, why it's important, and how we can measure it robustly.
So spoiler alert, before we dive into the details, here's my bottom line up front in case anyone has only five minutes and wants to drop off early. Here are the key takeaways.
When thinking about assessing equity, we really need to keep in mind that equity is a complex, multidimensional construct, and it means different things to different people. But there are some essential elements that I'll describe in just a moment.
We also need to keep in mind that, as we frame the development of equity standards and develop equity evaluation methods, we need to think of it as a dynamic process rather than a static one. And so what I mean by that is that we need a process that allows for adaptive adjustments to the standards as the research and data catch up to what policymakers and practitioners are trying to do from a programmatic perspective. We also need to be forward-looking, that is, think about how the equity standards or methods that we're developing will be meaningful under future conditions that don't necessarily look like current or past conditions.
And then thirdly, we need to develop a systematic and robust approach to measure progress toward equity. And that includes doing the hard work of establishing an action-logic model, considering the completeness and balance of indicators and metrics derived from that logic model, clarifying how we plan to identify our target audiences, and clarifying the purpose for which our equity standards or methods are intended to be used.
And then last, and certainly not least, it's important to keep in mind that we need to partner with key stakeholders, which includes impacted communities, to determine both the reasons and the methods for measuring equity performance.
So as you no doubt know, technical risk assessments have been developed and refined over several decades now, bringing the latest insights and methods from engineering and other physical sciences. So this has been terrific in bringing a lot of the latest empirical approaches to measurement of risk and hazard. Unfortunately, however, the result has been that most risk assessment approaches focus on damages to property assets, and so have ended up overweighting wealthy individuals or areas when determining vulnerability and need. So in short, you have to have significant assets to be counted in this process, and as a result, these approaches tend to overlook poor people and poor communities.
But a large literature has developed more recently, highlighting the important role of sociodemographic factors that affect disaster preparedness, response, and recovery. So there's an emerging interest in mechanisms or tools, such as differential cost-sharing, that could offset the wealth bias that's been baked into these traditional risk assessment and benefit-cost analysis methods. So consequently, what we're doing now is trying to figure out equity standards or outcomes and evaluation processes to determine both what the baseline conditions are and what our goals are and how to get there in disaster assistance provided by the federal government or others.
So just as a quick primer about equity and equity concepts, there are multiple approaches to defining equity. And here I'm going to use McDermott and colleagues' 2013 framework for equity analysis that they developed in the context of ecosystem service markets. So McDermott and colleagues define equity by evaluating the change in the relative situation of particular groups in society. So while you might think of justice as about rights, equity is really a comparative construct. So it's principally concerned with relationships between people and their relative circumstances.
There are three main dimensions that form the content, or the "what," of equity. And together, these dimensions delimit and characterize the subject matter of equity as explicitly or implicitly defined in policies or projects or plans.
So the first dimension is what we call "contextual equity," and these are the historical forces that create an uneven playing field for applicants. So it's the extent to which preexisting political or socioeconomic conditions limit or enable people's capacity to engage in and benefit from resource distributions.
The second dimension is what we call "procedural equity," which reflects the decisionmaking processes or rules, and the extent to which the process recognizes different groups to ensure their inclusion or representation.
And then the third dimension is called "distributional equity," which refers to how costs, risks, and benefits are allocated or distributed across society. And there are several different principles of distributive justice. For instance, there's equality, which targets an equal distribution of costs and benefits for all community members. There's the principle of social welfare, where we maximize net social benefits and share profits across community members. There's the merit principle, which rewards disproportionate input or effort or accomplishment. And then there's the need principle of distribution, where benefits specifically are designed to improve the welfare of the least advantaged or the most marginalized community members.
So those three dimensions form the content, or the "what," of equity. And how these dimensions are shaped depends a lot on other questions that frame the whole equity problem, if you will. So one of these questions is: Who counts as a subject of equity? Who is the target group, and at what scale are we talking about? There are also questions about why we are interested in equity, or not. Are we trying just to do no harm, ensuring no one is made worse off? Or are we actually trying to advance equity, that is, move toward a situation where there's more equity across groups? How these parameters—the what, the who, and the why of equity—are determined will have a big influence on what is measured and concluded about equity progress and equity outcomes.
So I just want to pause for a second and be clear here that social vulnerability, which we've been hearing a lot about in different circles of debate, communication, and public discourse, is very different from equity. "Social vulnerability" reflects combinations of socioeconomic and other processes and conditions that shape people's ability to withstand and recover from stressors and shocks, or disasters.
"Equity," to recap—if you recall, I said it's a comparative construct. So it's characterized, in particular, by broad perceptions of fairness across disparate groups. And it allows for the unequal distribution of benefits and costs. And ultimately, the goal of equity is fair access to livelihood, education, and other resources such that race and gender, or other demographic characteristics, are no longer a factor in the assessment of merit or in the distribution of opportunity. So in the context of extreme events, "equity" means that sociodemographic characteristics such as race, or gender, or age, or anything else, predict the distribution of disaster aid only to the extent that these things are related to the need for aid.
So for assessing equity performance, that is, whether a grant or program or other activities increase or decrease equity over time, we need an evaluation model to structure these assessments of whether the areas of highest need are adequately targeted by the funding. So I think of this like a road map—it's technically called an action-logic model—but it's very helpful for highlighting how interdependent program elements lead to expected outcomes. Or, in other words, what steps are being taken to achieve the long-term goal of equitably reducing risk?
The starting point is really the context in which equity is being assessed. So this is the legislative mandates, executive orders, or other guidelines that are available that inform the inputs—for instance, the amount of resources or the target that a particular program is focused on. We need to understand the inputs, the resources that we have to build the capacities toward equity, in the long term.
But really, a lot of our work focuses on outputs, that is, the activities that are conducted by a program, as well as the participation of the specific equity targets, or target audiences, if you will. And the outputs essentially describe what a program or a plan or an activity does, with the resources available through the inputs, to direct the course of change for particular groups.
And then ultimately there are outcomes and impacts, which are often collapsed into one bucket or split into separate buckets, reflecting what would be expected to be achieved in the short, the medium, and the long term.
And then, as always in action-logic models, there are certain assumptions and external factors that also need to be measured, but I won't be talking about those today. We're really just focused on the guts of the logic model: the central buckets of inputs, outputs, outcomes, and impacts.
So just to put some meat on the bones, I've inserted some examples here from work that I'm familiar with to illustrate each of these buckets in the logic model.
So inputs, for example, include federal funds allocated to a program, or a research and assessment base that already exists, or other external technical assistance that a program might want to draw upon.
Then outputs include the things that a program does. So, for instance, when a notice of funding opportunity is released, it specifies technical or qualitative criteria, such as the different levels of federal cost share for different types of activities or different types of beneficiaries. A program may also specify that it's particularly interested in supporting projects aimed at different areas defined in certain ways. It may define economically disadvantaged rural communities, for instance, as a particularly important group toward which resources should be allocated. So in the BRIC program, Building Resilient Infrastructure and Communities, for instance, in year one they actually used the term small impoverished communities, but now the term is economically disadvantaged rural communities. And this identifies areas with a population of 3,000 or less and a per capita income not exceeding 80 percent of the national per capita income. So really, they're using population and income to define areas of need.
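To make that screening rule concrete, here is a minimal sketch of the eligibility logic just described, a population of 3,000 or less and a per capita income not exceeding 80 percent of the national figure. The function name and parameters are illustrative; this is not code from any official FEMA tool.

```python
def is_edrc(population: int, per_capita_income: float,
            national_pci: float) -> bool:
    """Illustrative check of the EDRC definition described above:
    population of 3,000 or less AND per capita income not exceeding
    80 percent of the national per capita income."""
    return population <= 3_000 and per_capita_income <= 0.80 * national_pci

# Example: a town of 2,500 people with per capita income of $28,000,
# against a hypothetical national per capita income of $40,000
print(is_edrc(2_500, 28_000, 40_000))  # True: 28,000 <= 0.8 * 40,000
```

Note that both thresholds must hold at once, which is exactly why a poor but populous urban area, discussed below, can fall outside the definition.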
And in terms of equity outcomes, we've already talked a little bit about different dimensions of equity potentially being affected. So for our example of a higher federal cost share going to economically disadvantaged rural communities, the expectation is that we would achieve distributional equity, in that a higher proportion of the funding would go to communities with fewer resources. However, this could pose a procedural barrier for other communities, such as poor urban areas, that don't meet the EDRC criteria but still have less access to the cash or other in-kind resources needed for their share.
And then finally, in terms of impacts, I think there's a lot of work to be done here in terms of understanding and defining and measuring what it means to achieve an equitable risk reduction. I've inserted some suggestions here about reducing susceptibility and improving coping and adaptive capacity. But there's certainly more that we need to do to build out this bucket.
So let's turn now to a question that seems to be on everybody's mind, and that is: Equity for whom? This has become increasingly topical, since policymakers are now urgently trying to figure out how to identify whom they want to target. In particular, initiatives such as the Justice40 Initiative, established through one of President Biden's executive orders, ask federal agencies and departments to identify the underserved, vulnerable, or otherwise variously termed communities that should be targeted with the resources they have and the activities they're conducting. So various terms have been used historically, such as disadvantaged or underserved or vulnerable. But exactly what people have meant by these terms varies. So let's talk a little bit more about potential target groups and how they have been or could be identified.
So historically, vulnerable groups have been defined in multiple different ways. I mentioned already the terms economically disadvantaged rural communities, or formerly called small impoverished communities, which emphasize population and income. But other agencies and programs have focused on other definitions. For instance, trying to address areas with high poverty; or unemployment, as would be targeted in opportunity zones; or in terms of population density, so this is the urban-rural divide; or housing overcrowding; or incidence of disease or adverse health conditions.
And more recently, disaster and social vulnerability researchers have really led the charge for expanding the variables, or factors, considered in identifying vulnerable or underserved areas and communities. They've introduced other sociodemographic variables, and that set has grown very large: people with disabilities, certainly race and ethnicity, age, language competency, home ownership, and so on. The set has expanded dramatically.
And this has led to efforts to try and gauge vulnerability and resilience using indices that simplify the use of these many metrics that are available, for instance, through the U.S. Census. So one example index is the EPA's EJSCREEN, which includes a demographic index. And this is one of the simplest ones I've been able to find.
The EPA's EJSCREEN is really designed as a pre-decisional screening tool for understanding where the effects of existing pollution might be greatest. So what they do in the demographic index part of this screening tool is combine two metrics that they think are good proxies for communities' health status and their potential susceptibility to pollution. These two variables are percent minority and percent low-income, which are drawn directly from the American Community Survey at the block group level. These two variables were explicitly named in Executive Order 12898 during the Clinton administration, and they correlate with vulnerability and susceptibility. And so the EPA makes the case for simply taking an average of these two indicators.
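The arithmetic here is deliberately simple, just the average of the two indicators. As a sketch, with illustrative variable names (both indicators expressed as percentages):

```python
def demographic_index(pct_minority: float, pct_low_income: float) -> float:
    """EJSCREEN-style demographic index: the simple average of the
    percent-minority and percent-low-income indicators (0-100 scale),
    as described above. Names and scale are illustrative."""
    return (pct_minority + pct_low_income) / 2

# A block group that is 40% minority and 30% low-income
print(demographic_index(40.0, 30.0))  # 35.0
```

The appeal of this approach, as noted above, is transparency: there are no weights or latent factors to validate, only two directly observed Census indicators.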
And I think this is a good place to start, because these are very general indicators of a community's potential susceptibility, in this case to impacts from pollution exposures, at a block-group level. These variables are correlated with factors related to increased susceptibility, such as poor health status or reduced access to health care. They also reflect a lack of resources, such as language skills or education, that would help people to avoid exposures or to obtain treatment. They're also used by other federal programs, and we'll talk a little bit about the extension or inclusion of other variables in these efforts.
But then finally, let me say that the use of just two variables in the demographic part of the EJSCREEN tool avoids the challenges of more complex indices, such as Cutter's Social Vulnerability Index (SoVI) or the CDC's Social Vulnerability Index (SVI).
So some of you are probably familiar with these indices, and I think they're important. They have shared characteristics in that they all represent latent constructs that are not directly observable. But they also have some limitations. They're used very widely by researchers in government and non-government organizations when comparing different geographic units in terms of their relative levels of vulnerability. But I want to add a strong note of caution, because none of these indices has been definitively validated. And I'll point you to two recent papers that are cited at the bottom of this slide.
One, by Spielman and colleagues, suggests that SoVI and, potentially, other indices have problems with internal consistency; that is, they lack measurement reliability. For instance, in this paper they show how index values at the county level change in response to expanding the amount of data fed into the index. This creates a real practical problem for a policymaker trying to identify a vulnerable community to target in their disaster program: in one analysis a county-level community appears resilient, but in another the index values suggest that it is vulnerable.
SoVI also tends to lack theoretical consistency, or what we call construct validity. For instance, in their analyses, Spielman and colleagues found that increasing unemployment is related to lower values of SoVI, but, if you think about it, it should be the other way around. And indeed, some aspects, such as percent minority, are very context dependent, such that the same value in a different place might have a different meaning. So, all that to say, caution against using vulnerability indices in policymaking is advised at this time, due to these inconsistencies and a need for a lot more research to really understand how they play out in the real world.
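The internal-consistency problem can be illustrated with a toy numerical example. This is not a reproduction of SoVI's actual construction (which uses a specific principal-components procedure and variable set); it is a simplified stand-in showing the mechanism: a PCA-based score recomputed on a subset of counties can reorder those same counties, because standardization and component loadings depend on the data extent. The data here are synthetic.

```python
import numpy as np

def pca_index(X):
    # Standardize each indicator, then score each county on the leading
    # principal component (a simplified stand-in for a SoVI-style index)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    return Z @ eigvecs[:, -1]  # scores on the largest-variance component

rng = np.random.default_rng(42)
national = rng.normal(size=(300, 6))  # 300 counties x 6 indicators (synthetic)
region = national[:50]                # the same 50 counties, analyzed alone

# Rank the identical 50 counties under the two analyses
ranks_national = pca_index(national)[:50].argsort().argsort()
ranks_regional = pca_index(region).argsort().argsort()

# The rankings differ, because both the standardization and the
# component loadings change when the data extent changes.
print(np.array_equal(ranks_national, ranks_regional))  # False
```

This is exactly the practical problem described above: the same county can look resilient in one analysis and vulnerable in another, with no change in the county itself.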
So ultimately, policymakers are interested in capturing whether their program had the intended impacts. And this is where I think research and practice face the biggest challenge in determining whether inequity gaps have closed. That is, is anyone really better off? And in what ways? This is where I've suggested that we should be aiming for reduced pre-event susceptibility. We're trying to measure things like coping capacity, that is, an individual's or group's use of available resources and opportunities to absorb the impacts of an event, manage needs, and overcome the immediate and short-term effects of a hazard-related loss. Or improved adaptive capacity, that is, the long-term adjustments that acclimate people to new norms, reduce susceptibility, and ultimately increase coping capacity.
Yes, go ahead.
We have a question here, I think related to the topic you just completed. The question is: Any thoughts on the Census Community Resilience Estimates? Is it internally or theoretically consistent?
Actually, I don't know. I don't think that work has been done yet. I think generally, research has to catch up with how these indices and values are being used or planned to be used.
All right, so in terms of indicators and metrics of equity outcomes, there are myriad approaches. Some focus on health, others on education or transport or financial outcomes. Others have focused primarily, or even exclusively, on built or physical infrastructure outcomes, or on outcomes related to the natural environment and, to a much lesser extent, the ecosystem services provided by natural environments. But to summarize the main findings of our recent review, cited at the bottom of the slide, there's really no one approach currently available that you can take off the shelf and use in a disaster context. So here I'll just summarize; you can read the whole report for the details. The key challenges are as follows.
Firstly, indicators and metrics are often not specific to racial equity, which, you know, is certainly of particular interest. Different programs or organizations have different types of equity they're interested in. And with the increased focus on racial equity in particular, it's really highlighted a need for more disaggregated data or analytic techniques to measure differences by race.
A second challenge is that the different topics of interest to different programs—health, education, transport, or even electricity, food, you name it—are addressed at different geographic scales. So when we try to build up toward an overall equity index across all these different sectors, different sets of indicators will have been combined in different ways, and it's not possible to compare an index made by one group, in one area, or in one sector with others.
A third challenge is that oftentimes indicators are developed for theoretical purposes and haven't been used in real-world contexts, whether disaster or non-disaster. And those indices and metrics that have been used in real-world contexts were typically not developed for disaster situations.
Another challenge is that the selection of indicators requires tradeoffs. So balancing things like the validity or the reliability or the timeliness of the data and the measures available, or the utility of different spatial scales.
Another challenge is that there are rarely criteria specified, transparently at least, for selecting the data for different indicators or measures. And this is important for providing context about the intended purpose for using the selected data and indicators so that we know we're comparing apples and apples across different communities or geographical areas.
And then, finally, the appropriate measures of baseline conditions have often not been identified. So if we're talking about change in situation, moving toward more equitable outcomes, we need to know where we're starting, and those baseline conditions are really important to capture.
So ultimately—sorry, I should do a time check here, Jessie—ultimately, equity indicators and metrics, I think, should be chosen based on the objective of the decisionmaker. And certainly, things like asset loss would be an appropriate metric for an insurance company. But well-being loss, I would argue, is more effective for considering policies that aim to focus on protecting people in vulnerable situations from a more holistic perspective. The choice of equity metrics may differ depending on the reason for measuring performance. And let me just run through a few different reasons that might be motivating the interest in equity or the development of an identification of different equity indicators and metrics.
So, for instance, one reason for measuring equity performance might be that the communication or engagement with stakeholders is the primary interest. So in this case, the focus is really on inviting discussion, generating buy-in, and trying to reach a shared vision. And in this case, equity indicators would likely need to cut across different sectoral priorities.
When the purpose for measuring equity is primarily deliberative planning or decisionmaking, then I think the focus is more on goal setting, or on the mechanisms to achieve the goals, or agency alignment. And then the equity indicators in this case would be to measure changes in the drivers of inequities.
Another situation is where the purpose of the equity performance measurement or assessment is to justify investments. And the focus in this case is really on how investments would be spent and to what effect. And in this case, equity indicators need to capture different economic benefits, but also the broader non-economic benefits.
When the purpose is measuring or improving accountability and governance, the focus is on explaining efforts, accomplishments, and progress toward goals. And in this case, we certainly need baseline measures, and data disaggregated along different sociodemographic cleavages. Unfortunately, this will rule out a lot of different indicators or measures, simply because the data are not available.
And then finally, and this one's my favorite, a reason for measuring equity performance is to support learning and adaptive management. And so the focus here is really on monitoring and reviewing or evaluating outcomes of actions taken. So did this program make a difference in the way that we expected? And this includes trying to identify failures. So in this case, we'll need indicators that tell us when something doesn't work and what lessons we can learn from that, which could be politically awkward or damaging. And so this is often not something that's very attractive to policymakers or practitioners, or at least their bosses, in the real world. But it's certainly something that would help us move this field further in a more data-informed way.
So again, ultimately, indicators and metrics need to be chosen based on the objective of the decisionmaker, and there are a lot of different legitimate reasons for choosing different indicators and metrics.
So if I just have a couple of minutes to finish up here, Jessie, I wanted to dive into a direction for future research based on emerging work. And again, it comes back to my effort to distinguish social vulnerability from equity and what that means for measurement. So hopefully by now you've understood that one of my key messages is that, when assessing equity, one size does not and should not fit all for every situation. And so I want to leave you with the idea that, to adequately address the task at hand facing all federal agencies right now, and plenty of other organizations, both government and non-government, where we really need to demonstrate and measure progress toward equity, we won't be served well by just applying our old thinking. Rather, the need is to consider different approaches that meet our current needs and purposes.
So for instance, on this slide I've noted that what I call the rather blanket use of existing approaches to identifying social vulnerability stems from a historical, research-based effort simply to identify spatial differences in vulnerability, recognizing that there is a lot of data to collapse and summarize. Indices have been very valuable in this space, but really for the purpose of identifying vulnerable individuals or community groups, especially in a spatial sense. But remember that indices like SoVI, SVI, and EJSCREEN (and there are a ton of other ones for vulnerability and resilience) were not developed as policymaking tools to offset technical risk assessments, which is what we're really motivated by here. Rather, they were developed to identify groups that needed help, for instance, to pre-position resources to prepare for a disaster event. And we know now from more recent research that there are some validity and reliability concerns that need attention.
So importantly, we know that when we characterize the status of a community in the present, that status is itself the culmination of myriad past decisions and circumstances. It's another matter entirely to think in terms of changing the status quo going forward in time. And in this latter context, it really matters how those past policies and circumstances are understood as root causes that need to be tackled. So moving forward, I think what we really need to focus on is developing measures that help us understand and capture the drivers or mechanisms of vulnerability and inequities. Existing vulnerability indices won't necessarily help us isolate those mechanisms, because by their nature they collapse across many different variables. So to develop strategies that address the mechanisms of inequities, we're really going to have to use new approaches for a deeper understanding of how disasters differentially impact different groups.
Jessie, how much time do I have? Do I still have a little?
Yes. You have 25 minutes, and we've got a couple of questions in the queue here.
Okay, well, let me just summarize, this is just one example of what I call a new approach that I'm excited about, just to give people a flavor of, well, what else would we use? Where can we go with this? But I'll keep it short so we can get to some Q&A and discussion.
So, this new approach that I have cited here is the emerging measure of well-being developed by Maryia Markhvida and Jack Baker from Stanford and Stephane Hallegatte and Brian Walsh from the World Bank. And they're really using a welfare economics approach to define well-being loss as a measure of the utility of consumption lost during a household's recovery from a disaster shock, accounting for asset losses and changes relative to initial income and wealth of the household.
So essentially, what they're doing here is using a multi-stage simulation that starts with a traditional assessment of damage to the built environment, as we've long done in risk assessment. They then assess the effect of that damage on productivity across economic sectors, using a dynamic adaptive regional input-output model. Next, they assess the impact on loss of employment and income at the individual level, capturing the ripple effects through supply chains, which we're all experiencing right now as the pandemic moves on and, hopefully, gets resolved in the coming months. And then they calculate well-being losses at the household level, considering unique socioeconomic characteristics such as initial assets and income levels.
So basically, this approach is trying to address the bias in our standard asset loss metrics, risk assessment approaches, or our benefit-cost assessments that we've used and relied on heavily up to this point, which have emphasized impacts on the wealthy. But by incorporating this new type of approach, we'd really allow for risk reduction strategies to address the drivers of vulnerability, such as low wealth levels or volatile income sources.
So I think there are a lot of unanswered questions, which will probably please researchers and frustrate policymakers and practitioners. But I think it's important to unearth some of these questions, because all federal agencies and other organizations in the public and private sector are really grappling with these challenges of, how do we improve equity? How do we improve the equity performance of our programs or our efforts? And how do we ensure that the resources we have are getting to the most vulnerable members of our communities, especially in these very difficult disaster contexts?
So I mentioned Justice40; I think that's certainly inspiring and motivating a lot of work right now across federal agencies. And one of the questions that comes up is, how do we address initiatives like Justice40 in a consistent way across programs, yet allow for the different needs, target audiences, resources available, and missions or goals that these programs have? How do we allow for differences not only in the equity goals, but in the mechanisms and the measures across programs focused on different needs?
What have we learned from prior investments and alternative approaches that might help us in this uncharted territory as we move forward? And how can we do that in an urgent manner?
And then finally, what do we do while data and methods catch up with policy? How do we address gaps or fill in lacking data when we need baselines to know whether and how we've moved from one situation to another, hopefully more equitable, situation?
I do have a lot of suggested reading for anyone interested, and the glossary of terms as I've used them. But I think at this point, Jessie, I'd be happy to open for discussion and Q&A.
Great. So someone is asking, To what degree are measures of social vulnerability and equity really just proxies for poverty? My cynical interpretation of these conversations is that they are a way of talking around the obvious, but much more politically charged, solution of "give money to people who don't have it." But I'd be happy to be convinced otherwise.
That's a great question. So, looking at the research literature, or the discussions in more-academic circles, it's like a cycle. So sometimes we say exactly that, that you know, the issue basically is poverty, and nobody wants to talk about poverty, and it's hard to solve because it is multifactorial and multidimensional.
And then the conversation moves to, well, what are the drivers or the key factors leading to that poverty, either historically or maintaining the poverty and the difficulties for lower-income communities and individuals? And in that conversation we say, well, we have to look at the correlates and the socioeconomic or demographic variables in the information that we have.
So I think that whoever asked that question is right; it is fundamentally about poverty. But I think we know a lot about things that correlate with or, in fact, cause poverty and cause it to be maintained and exacerbated, especially in disaster circumstances. So we are talking about that, and there are different schools of thought as to how we can be most effective in addressing that problem.
Great. So another question is, any pointers on how to evaluate equity for tribes in consideration of data sovereignty?
So at the outset, I had as one of my key takeaways that we need to develop indicators, metrics, and frameworks for assessing equity performance in consultation with impacted communities. And in this particular instance, there are additional sensitivities and needs, especially to avoid entrenching, you know, historical injustices and challenges in this scenario.
So my broad suggestion would be, we need to consult with tribes or Indigenous groups or others that have particular value systems or ways of understanding how different factors in our society interact and impact their situation, especially in disaster contexts. And I think it's through that discussion, focused on the processes involved— because I still think evidence-based approaches are critical for helping us have a robust process that allows us to quantify and measure our progress and also talk to each other about things that we can get our hands around. This is not to negate the importance of qualitative work, which can also be quantified. But understanding how those data, whether qualitative or quantitative, are developed and managed—by which I mean stored, used, or made available for these kinds of processes—has to be done in collaboration with sensitivity to those particular needs.
Great. So another question here is that, my pet theory is that it is easier and less messy to invest for equity earlier in the disaster cycle (during mitigation) rather than later (for example, recovery). Any value in that?
Yes. I couldn't agree more. I think that the short-term versus long-term tradeoffs are challenging and difficult. But if we can think more holistically and include temporal dimensions of, you know, assessing the value of different efforts, I think that putting a lot more resources into mitigation would save us a lot of money that we are currently spending on response and recovery efforts.
All right. So we have another question here: You stated something earlier to the effect that use of indices like the SVI (Social Vulnerability Index) is discouraged, as some of the elements don't paint the full picture. Of course, nothing is perfect, but can you address that aspect again and what you're suggesting instead?
Yes. So my concern with any index is that by nature, it's designed to collapse a lot of information. That collapsing can be done statistically, or through expert consensus—there are different ways to create an index, and you can include different factors as part of it, and so on. But at the end of the day, we're seeing research, especially recently, showing that these index values, at least at the county level, are sensitive to the data used to create them, in a way that makes them flip-flop: the same county appears vulnerable in one analysis and not vulnerable in another. I think that's not helpful for policymakers who are trying to say, well, is this area or county vulnerable or not?
So, I mean, that's a whole other talk as to why this happens. I think in that particular Spielman paper that I'm thinking of, it was the extent of the data set—whether it was state level, regional level, or national level. And it's a contextual issue: when you look at, say, income in an area relative to the nation versus relative to other areas in its state, it just looks very different. So data are very context sensitive, I think, is the way I'd summarize these issues. And as a result, when we collapse a lot of data, the result can obscure the decisions or assumptions made and lead policymakers astray in some situations, if you don't fully grasp exactly how all those decisions were made to create that number.
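That context sensitivity can be shown with a minimal, entirely hypothetical example—the income figures below are made up—in which the same county median income lands at a very different percentile depending on whether it is ranked against a state pool or a national pool.

```python
def percentile_rank(value, pool):
    """Share of the comparison pool at or below the given value."""
    return sum(x <= value for x in pool) / len(pool)

# Made-up median household incomes (thousands of USD):
national_pool = [30, 35, 40, 45, 50, 55, 60, 65, 70, 75]
state_pool = [30, 32, 34, 36, 38, 40]  # a lower-income state

county_income = 38
rank_vs_nation = percentile_rank(county_income, national_pool)  # 0.2
rank_vs_state = percentile_rank(county_income, state_pool)      # ~0.83

# Same county: "vulnerable" when ranked nationally, "well-off" in-state.
assert rank_vs_nation < 0.5 < rank_vs_state
```

An index built from state-level data and one built from national data would classify this county differently, even though nothing about the county itself has changed.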
So my alternative suggestion— and I guess the other thing, too, is these indices were not designed for that decisionmaking or policymaking context. They were designed for other purposes. So as we move forward trying to identify target groups or areas—geographic, or, you know, or communities—that we feel our resources are going to help, and we have, you know, a logic model that tells us why we think— what's the chain of reasoning that gets us from our resources or activities to a more equitable space? I think it's really important to articulate the elements that are relevant to your resources, your programs, and to the particular outcomes that you're looking at.
It's not— I mean, our goal in reducing inequities is not to make those index numbers increase or decrease or get flatter. Our goal is to help people. And by that, we have to be very specific about what we mean. Do we mean people need transport to be more accessible? Do we mean they need their families or their kids to have health outcomes that don't depend on their zip code? Does it mean that some communities don't face greater food insecurity than others after a disaster? That some can build back more quickly, that their power gets turned on more quickly than others'? So these are the kinds of lower-level, or more specific, things that I think are important to consider as we identify which groups sociodemographically, or which variables, are of interest to us. And why does that matter in the context of the particular outcome I'm interested in and the resources I have to spend on programs that would benefit or improve those groups' outcomes on those particular dimensions?
Excellent. So we've got a couple of other questions related to this topic of, how do we manage to do analysis today when there's a data gap? So one of them is asking if— there are a number of data sets that may not be accessible or useful on the timescale that FEMA works on. For example, a two- or five-year update may be too long for the analytic inquiries FEMA has. Have you seen any indicators or measures, perhaps that household well-being measure, that are available on a more frequent timescale?
Right. So this is a real challenge that I don't think there's an easy answer to. So if you just think about the census being updated, even if it were updated annually—you know, the full census is only done every 10 years, and there are mechanisms by which we try and update information like that more frequently. But just given the limited resources we have to collect those data at the national level, that is always going to be a challenge. And so I think we do need to look at more analytic strategies to just test the sensitivity of what we're doing. Because, you know, perhaps it doesn't matter at the national level, but we have to do the analysis to make sure that that's true.
So at the local level, I think the possibility is that we could gather data more frequently because it's a smaller area, or a number of people, that we're focused on. Of course, we'd have to make sure the resources are allocated for that particular activity. And I think that's important, especially if we want to assess change over time as a result of the program that's being implemented, or projects being implemented. But the limitation of that is, of course, it's not necessarily representative of a larger geographic area, such as a state or, you know, the whole nation.
So a program like FEMA, or an agency like FEMA, has the challenge that it is doing two things at once. It is both trying to implement programs like BRIC at a national level and see what equity outcomes look like at that national level across all the projects that it's funding. But at the project level, the scope, or the scale, of the impact is going to be at a much smaller scale or unit. So I think one of the challenges is just recognizing those different levels and understanding that we can— if we don't have data available at the national level, we may be able to get it at the local level. But then the ability to generalize from that needs some qualifications.
So no easy answer, but just recognizing that there are different purposes or needs. And perhaps we don't all need the same quality or frequency of data at the same scale for all purposes.
Thank you, Melissa. So you may have already answered this question, but there might be some elaboration that could further inform this. Somebody is asking, what is your best advice for filling in the baseline data gaps, since basing algorithmic analysis off of information that doesn't exist or we don't have is very difficult?
Yeah, again, there's no easy answer to that one. I think, you know, the best suggestion is: gather the baseline data now for, you know, future analyses down the road. But I think you have to triangulate, or use a number of different approaches, to get at the best understanding that you can possibly get and have a sense of how well you're doing that. So there is not going to be one particular analytic approach or technique, but you know, comparing your analysis—if it's a statistical analysis, filling in missing data or using other statistical techniques—against, you know, some qualitative check from other data sources. If there's some historic record. You know, it really depends on, you know, are you measuring the depth of floods? Are you trying to fill in, you know, for instance, data on race and ethnicity that was just never collected on the census at a certain particular point in time because, well, for a variety of reasons? But I think using multiple methods to check and double-check our estimates, and then having a sense of our confidence in those estimates, is really important so that policymakers or whomever is using those numbers can put the appropriate bounds or qualifications around the way in which they use the numbers.
Great. A question about whether or not you have reviewed disaster programs that currently use income metrics to target socially vulnerable communities and how successful they are, such as HUD's CDBG-DR program.
Yeah, I have not reviewed those. Income is a challenging metric because there are many different types and sources of income, and different levels—you know, we're talking about individual households. And at some point, I think, for policymaking, we have to figure out, are we getting any more juice from the squeeze? Are we really learning more about vulnerabilities or vulnerable communities by getting more refined in our measures of income? Related to that, I think our measures of wealth—which are even harder to establish but also very important, especially since our risk assessment and benefit-cost analysis is so biased toward asset values and wealthy communities—matter too. A better understanding of how we measure wealth, how we could target programs relative to people's initial wealth or starting point, and how disaster events impact them relative to those wealth and income conditions is really important. But yeah, lots more research is needed there.
OK. So we have a handful of questions about when the recording will be available or if we can share the slides. So I'd like to address that. The slides were derived from a report that is publicly available, so you can get all of this material from the report. And I don't know if we can post a link to that currently, but if you cannot find it, please—
Jessie, it's up on the screen now; it's the third suggested reading.
Finucane et al. It's a scoping literature review.
Great. You know, if anyone has issues or challenges, please feel free to email me. It is email@example.com—R-I-P (as in Peter)-O-S-O at rand.org—and I would be happy to email you the report.
Yes. OK, well, we only have a couple of minutes left and probably not enough time to answer the remainder of these questions, which look like they could take us off into lengthy discussions. So I just want to thank everyone for attending today. And also thank you, Melissa, for this wonderful presentation, and I really look forward to continuing our work in this area and helping us make progress in this domain. So thank you all.
Great, thank you, Jessie, for the opportunity. Thank you to our audience for your questions and interest.