Jan 6, 2022
Photo by Casey Williams/Clarksville Now, https://www.clarksvillenow.com.
Employers, landlords, and volunteer organizations routinely conduct criminal background checks to identify and filter candidates who might be deemed too risky to hire, rent to, or participate in volunteer opportunities. Results from these background checks are often used to justify barring people with convictions from those activities for a set period of time.
Given that roughly 30 percent of people in the United States have criminal histories, exclusions resulting from background checks can foreclose opportunities for many. But what if it were possible to show that some people pose a low risk of recidivism?
The authors of the RAND research report Providing Another Chance: Resetting Recidivism Risk in Criminal Background Checks introduce the reset principle, which provides a foundation for risk-assessment models that could help identify people whose risk of recidivism has declined. The reset principle requires that a person's risk be estimated when a background check is conducted rather than at the time of last interaction with the criminal justice system.
The reset principle states that any risk prediction model must be able to update risk estimates, i.e., be "reset" at the time of a person's criminal background check.
This principle, and the risk-prediction models that satisfy it, accounts for the time a person has spent free in the community without a conviction, an important signal of recidivism risk that may speak to their employability.
The reset principle marks a development beyond current recidivism risk-estimation approaches that are grounded in the needs of the criminal justice system. Current methods seek to determine the risk that a person posed at the time they were last convicted or released from prison. Background checks often occur many years later, when a person is applying for a job or other opportunity. In the context of criminal background checks for employment, the question therefore should be, "What level of risk does the person pose now, at the time of the background check?"
The report demonstrates the viability of creating a risk-prediction model that adheres to the reset principle. Such models may provide the foundations for creating risk-assessment tools that employers and others could use to more accurately estimate an individual's risk of recidivism. Such tools could open opportunities for people sooner than inflexible exclusions that bar those with convictions from certain activities for years.
The RAND research team developed a model that drew from a data set of convictions from North Carolina covering more than 1 million people. The team employed state-of-the-art statistical and machine-learning techniques to analyze the data and to assess the adequacy of its model for estimating recidivism risk.
In the process of developing the model, the RAND team made important observations in the data that may help alter some employers' and policymakers' outdated perceptions of recidivism risk. These observations, which build on past research, undercut the too-common belief of "once a criminal, always a criminal."
There is a strong case that people with convictions can change. Organizations, employers, and other groups conducting background checks could benefit from more-accurate, individualized risk-prediction models and tools to identify candidates who fit their needs.
The RAND research establishes that such a model can be created. It is crucial to note that methods that predict recidivism risk will reflect the biases and inequities in the criminal justice data that are used to build models. Given the history of unfair systemic racial biases in the U.S. criminal justice system, any future tools that adhere to the reset principle must account for inherent factors that bear on fairness before they can be deployed. This is a key next step for researchers.
Survival functions are a statistical tool often used in the health care field to estimate the probability that patients with a certain diagnosis might survive past some point in time. The same approach can be used to estimate the probability that a person with a particular criminal history record might "survive" for some time in the community without a new conviction. The RAND research team used survival functions to compare the probability that individuals with particular criminal histories would be reconvicted within a given amount of time after their background check.
Using the North Carolina data, the RAND team applied statistical tools called Kaplan-Meier estimators to illustrate general recidivism trends and used machine-learning models to construct survival functions that account for the data set's extensive information on criminal history.
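To give a concrete sense of the Kaplan-Meier approach, the sketch below implements the estimator on a small synthetic example. The durations and event indicators are invented for illustration; this is not the RAND team's model or the North Carolina data.

```python
# Minimal Kaplan-Meier estimator, illustrating the survival-function idea.
# The data below are synthetic and purely illustrative -- not the North
# Carolina conviction data used in the RAND report.

from collections import Counter

def kaplan_meier(durations, events):
    """Return (time, survival_probability) pairs.

    durations: years observed until reconviction or end of follow-up
    events:    1 if a reconviction occurred, 0 if censored
               (the person was never observed to be reconvicted)
    """
    reconvictions = Counter(t for t, e in zip(durations, events) if e == 1)
    curve = []
    survival = 1.0
    at_risk = len(durations)
    for t in sorted(set(durations)):
        d = reconvictions.get(t, 0)
        if d:
            # standard Kaplan-Meier product-limit update
            survival *= 1 - d / at_risk
            curve.append((t, survival))
        # everyone whose observation ends at t leaves the risk set
        at_risk -= sum(1 for u in durations if u == t)
    return curve

# Synthetic example: years until reconviction (event=1) or end of study (event=0)
durations = [1, 2, 2, 3, 5, 5, 6, 8]
events =    [1, 1, 0, 1, 1, 0, 0, 1]
for t, s in kaplan_meier(durations, events):
    print(f"P(no new conviction beyond year {t}) = {s:.3f}")
```

The key property for the reset principle is visible here: the estimated survival probability is conditional on how long a person has already remained conviction-free, so the estimate can be "reset" to reflect time already survived.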
The research report emphasizes the complexity of applying survival functions to any data set and the need for careful consideration of multiple factors when doing so. It offers five considerations for developing recidivism risk-prediction models for criminal background checks.
The RAND report details the approach to modeling the survival function against the North Carolina data set and demonstrates its validity in capturing true rates of re-offense. In verifying the model, the RAND team found that its predictions fit the data well and that the model was statistically well calibrated. The model's limitations are documented in the report, and some will be important in future discussions about bias, fairness, and equity.
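Calibration, in this setting, means that predicted probabilities match observed outcome rates. A common way to check it is to bin predictions and compare each bin's average prediction with the fraction of observed events. The sketch below is a generic illustration of that check, with synthetic numbers; it is not the RAND team's validation procedure.

```python
# Sketch of a simple calibration check: bin predicted probabilities and
# compare each bin's mean prediction with the observed outcome rate.
# The predictions and outcomes here are synthetic, not from the report.

def calibration_table(predicted, observed, n_bins=4):
    """Group (prediction, outcome) pairs into equal-width probability bins
    and return (bin_mean_prediction, bin_observed_rate, count) per bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(predicted, observed):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[i].append((p, y))
    table = []
    for pairs in bins:
        if not pairs:
            continue
        mean_p = sum(p for p, _ in pairs) / len(pairs)
        rate = sum(y for _, y in pairs) / len(pairs)
        table.append((round(mean_p, 3), round(rate, 3), len(pairs)))
    return table

predicted = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
observed  = [0,   0,   1,   0,   1,   1,   1,   1]
for mean_p, rate, n in calibration_table(predicted, observed):
    print(f"mean predicted {mean_p:.2f} vs observed {rate:.2f} (n={n})")
```

A well-calibrated model shows bin-level predicted probabilities close to the observed rates; large gaps in particular bins can also be an early sign of the bias and fairness issues the report flags.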
The RAND report should be viewed as a starting point for the research community. Further refinement of methods and data sources is required before models based on the reset principle can be integrated into tools employers and others could use in practice.
The research also identifies opportunities for policymakers to guide the development and use of these methods in the future.
Regulators should consider rules that will allow for the robust development of modern risk-assessment tools that employers can use. Equal Employment Opportunity Commission guidance to employers allows for the evaluation of job candidates using an individualized assessment of criminal history information. It calls for these screens to be validated in light of the 1978 Uniform Guidelines on Employee Selection Procedures.
The report also identifies some of the challenges with validating an accurate risk-estimation model under the Uniform Guidelines, which, when written, did not anticipate the development of algorithmic techniques, such as the report's risk-estimation model and the algorithmic hiring assessments that are widely used today.
Policymakers and business decisionmakers must define acceptable levels of risk. Recidivism risk models can compare the levels of risk between different individuals. The next step is for an employer to define whether a candidate's level of risk is acceptable for a particular job or opportunity. Defining these risk cutoffs is a key area where bias can emerge, and thus regulatory guidance on threshold-setting would be beneficial.
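To make the threshold-setting step concrete, the fragment below sketches how a cutoff might be applied to model output. Everything here is hypothetical: the function name, the candidate probabilities, and the 0.85 threshold are assumptions for the example, not values from the RAND report.

```python
# Hypothetical illustration of a risk cutoff. The 0.85 threshold and the
# candidate probabilities are invented for this example -- they are not
# recommendations or figures from the RAND report.

def passes_screen(p_no_reconviction: float, threshold: float = 0.85) -> bool:
    """Return True if the predicted probability of remaining conviction-free
    over the chosen horizon meets the decisionmaker-defined threshold."""
    return p_no_reconviction >= threshold

candidates = {"A": 0.92, "B": 0.78, "C": 0.88}
accepted = [name for name, p in candidates.items() if passes_screen(p)]
print(accepted)  # ['A', 'C']
```

The example shows why the report treats threshold-setting as consequential: the model only ranks risk, while the chosen cutoff determines who is actually excluded, which is exactly where bias can enter.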
Data quality can limit the development of recidivism risk models. Currently, there is no infrastructure for collecting data to support recidivism risk prediction in background checks. The development of such infrastructure should include discussions about prospectively collecting data that can be used to predict recidivism in the background-check setting.
Examining risk-prediction models for bias is crucial. Future tools based on risk-prediction models of this type need to be evaluated for fairness throughout their development, and future work should continue to ground these tools in concepts of algorithmic fairness and equity.