Evaluation of the Shreveport Predictive Policing Experiment
Research Questions
- What are the crime reduction effects of policing guided by statistical predictions?
- Do policing interventions informed by predictions have any effect on crime when compared with other policing strategies?
- Are signs of community disorder (and other indicators) precursors of more-serious criminal activity?
Even though there is growing interest in predictive policing, to date there have been few, if any, formal evaluations of these programs. This report documents an assessment of a predictive policing effort in Shreveport, Louisiana, in 2012, which was conducted to evaluate the crime reduction effects of policing guided by statistical predictions. RAND researchers conducted multiple interviews and focus groups with the Shreveport Police Department throughout the course of the trial to document the implementation of the statistical prediction and prevention models. In addition to a basic assessment of the process, the report shows the crime impacts and costs directly attributable to the strategy. It is hoped that this will provide a fuller picture for police departments considering if and how a predictive policing strategy should be adopted.
Key Findings
There was no statistically significant change in property crime in the experimental districts that applied the predictive models compared with the control districts; therefore, overall, the intervention was deemed to have no effect. There are both statistical and substantive explanations for this null effect. In addition, the predictive policing program likely did not cost any more than the status quo.
The Program Did Not Generate a Statistically Significant Reduction in Property Crime
- There were few participating districts over a limited duration, thus providing low statistical power to detect any true effect of the program.
- The null effect on crime may be due to treatment heterogeneity over time and across districts.
- On their own, the model's predictions may be insufficient to generate crime reductions beyond those achievable with traditional crime mapping.
Police Officers Across Districts Perceived Benefits from the Program
- Some prevention approaches reportedly improved community relations.
- Predictions provided additional information and further assisted commanders and officers in making day-to-day targeting decisions.
The Program May Have Been a More Efficient Use of Resources
- Treatment groups spent 6 percent to 10 percent less than the control groups to achieve similar levels of crime reduction.
- Because crime analytics were handled in-house, program costs were likely offset by reductions in officer overtime hours.
Recommendations
- Conduct further evaluations that have a higher likelihood of detecting a meaningful effect by ensuring that police in both experimental and control groups employ the same interventions and levels of effort, with the experimental variable being whether the hot spots come from predictive algorithms or crime mapping.
- Test the differences between predictive maps and hot spot maps. What are the mathematical differences in what is covered? What are the practical differences in where and how officers might patrol?
- Carefully consider how prediction models and maps should change once treatment is applied, as criminal response may affect the accuracy of the prediction models.
- Understand what types of information practitioners need to tailor their interventions to the hot spot appropriately.
- Understand the difference between police officer and administrator predictions (based on experience) and predictive maps (based on statistics), and how the two might be combined.
- Understand how preventive strategies are to be linked with predictions.
- Check on how preventive strategies are being implemented in the field to ensure program fidelity.
- Consider whether it is possible to create a predictive product that goes beyond generating more-accurate hot spots to enable interventions that are fundamentally advanced beyond the sorts of directed patrols employed in this program.
Table of Contents
PILOT Process Evaluation
PILOT Impact Evaluation
PILOT Cost Evaluation
The research described in this report was sponsored by the National Institute of Justice and was conducted in the Safety and Justice Program within RAND Justice, Infrastructure, and Environment.
This report is part of the RAND Corporation Research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.