Download

Download eBook for Free

Format: PDF file
File Size: 2.1 MB

Use Adobe Acrobat Reader version 7.0 or higher for the best experience.

Purchase

Purchase Print Copy

Format: Paperback, 84 pages
List Price: $27.95; Web Price: $22.36 (20% Web Discount)

Research Questions

  1. What are the crime reduction effects of policing guided by statistical predictions?
  2. Do policing interventions informed by predictions have any effect on crime when compared with other policing strategies?
  3. Are signs of community disorder (and other indicators) precursors of more-serious criminal activity?

Even though there is growing interest in predictive policing, to date there have been few, if any, formal evaluations of these programs. This report documents an assessment of a predictive policing effort in Shreveport, Louisiana, in 2012, conducted to evaluate the crime reduction effects of policing guided by statistical predictions. RAND researchers conducted multiple interviews and focus groups with the Shreveport Police Department throughout the trial to document how the statistical prediction and prevention models were implemented. In addition to this process assessment, the report estimates the crime impacts and costs directly attributable to the strategy. The aim is to give police departments considering whether and how to adopt a predictive policing strategy a fuller picture of what such an effort entails.

There was no statistically significant change in property crime in the experimental districts that applied the predictive models compared with the control districts; therefore, overall, the intervention was deemed to have no effect. There are both statistical and substantive possibilities to explain this null effect. In addition, it is likely that the predictive policing program did not cost any more than the status quo.

Key Findings

The Program Did Not Generate a Statistically Significant Reduction in Property Crime

  • There were few participating districts over a limited duration, thus providing low statistical power to detect any true effect of the program.
  • The null effect on crime may be due to treatment heterogeneity over time and across districts.
  • On their own, the model's predictions may not be sufficient to generate crime reductions beyond those achievable with traditional crime mapping.
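The low-power concern above can be illustrated with a quick Monte Carlo sketch. The district counts, effect size, and test below are illustrative assumptions, not the report's actual design or data; the point is simply that with only a handful of treated districts, even a genuine treatment effect is flagged as statistically significant only a fraction of the time.

```python
import random
import statistics

def simulated_power(n_districts, true_effect_sd, trials=2000, seed=1):
    """Monte Carlo estimate of the power of a two-sided difference-in-means
    test (rough normal-approximation cutoff) with n_districts per arm.

    true_effect_sd: the real treatment effect in standard-deviation units.
    All numbers here are illustrative assumptions, not the report's data.
    """
    random.seed(seed)
    detections = 0
    for _ in range(trials):
        # Control districts: no effect; treated districts: crime shifted down.
        control = [random.gauss(0.0, 1.0) for _ in range(n_districts)]
        treated = [random.gauss(-true_effect_sd, 1.0) for _ in range(n_districts)]
        # Welch-style standard error and t statistic.
        se = (statistics.variance(control) / n_districts
              + statistics.variance(treated) / n_districts) ** 0.5
        t = (statistics.mean(control) - statistics.mean(treated)) / se
        if abs(t) > 1.96:  # approximate two-sided 5 percent cutoff
            detections += 1
    return detections / trials

# With few districts per arm, a real effect is detected far less often
# than with many districts, so a null result is hard to interpret.
few = simulated_power(n_districts=3, true_effect_sd=0.5)
many = simulated_power(n_districts=30, true_effect_sd=0.5)
```

Under these assumptions, the small-sample design detects the same true effect much less often than a larger one, which is exactly why a null finding here does not rule out a real program effect.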

Police Officers Across Districts Perceived Benefits from the Program

  • Some prevention approaches reportedly improved community relations.
  • Predictions provided additional information and further assisted commanders and officers in making day-to-day targeting decisions.

The Program May Have Been a More Efficient Use of Resources

  • Treatment groups spent 6 percent to 10 percent less than the control groups to achieve similar levels of crime reduction.
  • Because crime analytics were performed in-house, program costs were likely offset by reductions in officer overtime hours.
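As a back-of-the-envelope illustration of the efficiency claim above (the dollar figures and crime counts below are hypothetical, not the report's data), spending 6 to 10 percent less to achieve the same crime reduction translates directly into a lower cost per crime prevented:

```python
def cost_per_crime_prevented(spend, crimes_prevented):
    """Simple cost-effectiveness ratio: dollars spent per crime prevented."""
    return spend / crimes_prevented

# Hypothetical figures: the treatment arm spends 8 percent less (the midpoint
# of the report's 6-10 percent range) for the same crime reduction.
control_spend = 100_000.0
treatment_spend = control_spend * 0.92
crimes_prevented = 50

control_cpp = cost_per_crime_prevented(control_spend, crimes_prevented)
treatment_cpp = cost_per_crime_prevented(treatment_spend, crimes_prevented)
savings_pct = 100 * (1 - treatment_spend / control_spend)
```

Equal effectiveness at lower spending means the treatment's cost-effectiveness ratio is strictly better, which is the sense in which the program "may have been a more efficient use of resources."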

Recommendations

  • Conduct further evaluations with a higher likelihood of detecting a meaningful effect. Ensure that police in both experimental and control groups employ the same interventions and levels of effort, so that the only experimental variable is whether the hot spots come from predictive algorithms or from conventional crime mapping.
  • Test the differences between predictive maps and hot spots maps. What are the mathematical differences in what is covered? What are the practical differences in where and how officers might patrol?
  • Carefully consider how prediction models and maps should change once treatment is applied, as criminal response may affect the accuracy of the prediction models.
  • Understand what types of information practitioners need to tailor their interventions to the hot spot appropriately.
  • Understand the difference between police officer and administrator predictions (based on experience) and predictive maps (based on statistics), and how the two might be combined.
  • Understand how preventive strategies are to be linked with predictions.
  • Check on how preventive strategies are being implemented in the field to ensure program fidelity.
  • Consider if it is possible to create a predictive product that goes beyond generating more-accurate hot spots to enable interventions that are fundamentally advanced beyond the sorts of directed patrols employed in this program.

Table of Contents

  • Chapter One: Introduction
  • Chapter Two: PILOT Process Evaluation
  • Chapter Three: PILOT Impact Evaluation
  • Chapter Four: PILOT Cost Evaluation
  • Chapter Five: Conclusions
  • Appendix: Technical Details

The research described in this report was sponsored by the National Institute of Justice and was conducted in the Safety and Justice Program within RAND Justice, Infrastructure, and Environment.

This report is part of the RAND Corporation research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.