Download eBook for Free

Format: PDF
File Size: 3.2 MB

Use Adobe Acrobat Reader version 10 or higher for the best experience.


Purchase Print Copy

Format: Paperback, 146 pages
Price: $37.00

Research Questions

  1. Which evaluation measures and approaches have been used and tested in previous evaluation studies, can they be applied to CVE programs, and how can program staff incorporate them into their own evaluations?
  2. How can program staff design an evaluation that is appropriate for their program type and available resources and expertise?
  3. What are the most useful ways to analyze the data resulting from an evaluation of a CVE program's effectiveness, and how can these data be used to make improvements?

Violent extremism poses a threat both within the United States and internationally. Countering violent extremism (CVE) requires addressing the conditions and reducing the underlying factors that give rise to radicalization and recruitment. Community-based CVE programs may engage in a variety of activities and target different populations, making it challenging to evaluate their effectiveness. Many of these programs also operate with limited resources and lack evaluation expertise.

The RAND Program Evaluation Toolkit for Countering Violent Extremism was designed to help program staff overcome these common challenges to evaluating and planning improvements to their programs. It begins by walking users through the process of identifying core program components and developing a program logic model to show connections between resources, activities, outcomes, evaluation measures, and the need the program addresses in its community. It then helps users design an evaluation that is appropriate for their program type and available resources and expertise, supports the selection of evaluation measures, and offers basic guidance on how to analyze and use evaluation data to inform program improvement.

Through checklists, worksheets, and templates, the toolkit takes users step by step through the process of determining whether their programs produce beneficial effects, ultimately informing the responsible allocation of scarce resources. The toolkit's design and content are the result of a rigorous, systematic review of the program evaluation literature to identify evaluation approaches, measures, and tools used elsewhere. The toolkit will be particularly useful to managers and directors of community-based CVE programs and to program funders.

Key Findings

In Evaluating a CVE Program, It Is Important to Identify Core Program Components and Select the Right Evaluation Measures

  • This toolkit can help community-based CVE programs overcome common challenges to evaluation by identifying core program components (such as activities, target audiences, and community needs being met) and appropriate measures for their program type and available resources and expertise.
  • The program evaluation literature offers a wealth of information to guide staff through the evaluation process. This toolkit drew on prior research to develop checklists, worksheets, and templates to help users identify and implement the right evaluation approach.

A Step-by-Step Approach Is Important: The Toolkit Is Best Applied Sequentially

  • For the most accurate evaluation, it is important to follow each step in the toolkit in sequence; each step builds on prior steps to help users design the most rigorous evaluation that their program can support.
  • Beginning with the development of a logic model, the toolkit helps users identify the target audience for their program, whether the program has been effective in meeting its goals, and where the program could be improved.
  • It can be difficult to know where to start: Interactive resources, such as checklists, worksheets, and templates, are designed to give users a complete picture of their program and guide changes and improvements.

This research was sponsored by the Office of Community Partnerships in the U.S. Department of Homeland Security and conducted in the International Security and Defense Policy Center of the RAND National Defense Research Institute, a federally funded research and development center sponsored by the Office of the Secretary of Defense, the Joint Staff, the Unified Combatant Commands, the Navy, the Marine Corps, the defense agencies, and the defense Intelligence Community.

This report is part of the RAND tool series. RAND tools may include models, databases, calculators, computer code, GIS mapping tools, practitioner guidelines, web applications, and various toolkits. All RAND tools undergo rigorous peer review to ensure both high data standards and appropriate methodology in keeping with RAND's commitment to quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit the RAND website.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.