Download

Download eBook for Free

Format: PDF file
File Size: 0.9 MB
Notes: Use Adobe Acrobat Reader version 10 or higher for the best experience.

Download Support Files

Metaevaluation Checklist

Format: zip file
File Size: 0.1 MB
Notes: The file(s) provided above are ZIP-formatted archives, which most modern systems can natively unpack. If your computer does not unpack the archive when you double-click it, you may need to use a separate decompression program such as UnZip.

Purchase

Purchase Print Copy

Format: Paperback, 128 pages
List Price: $27.50
Price: $22.00 (20% Web Discount)

Research Questions

  1. What are good practices for assessing U.S. Department of Defense (DoD) inform, influence, and persuade (IIP) efforts in terms of their effectiveness, cost-effectiveness, and extent to which they support the larger goals of military campaigns?
  2. What can IIP planners and assessment practitioners learn from commercial marketing, public communication, academia, and other sectors, and which approaches are particularly applicable to DoD IIP activities?
  3. How can planners ensure that they are reaching stakeholders and decisionmakers with the necessary information about the outcomes of IIP efforts, presented in the right way?
  4. How can DoD better support the assessment of IIP efforts, and how could better assessments lead to more effective and efficient IIP efforts?

To achieve key national security objectives, the U.S. government and the U.S. Department of Defense (DoD) must communicate effectively and credibly with a broad range of foreign audiences. DoD spends more than $250 million per year on inform, influence, and persuade (IIP) efforts, but how effective (and cost-effective) are they? How well do they support military objectives? Could some of them be improved? If so, how? DoD has struggled to assess the progress and effectiveness of its IIP efforts and to present the results of these assessments to stakeholders and decisionmakers. To address these challenges, a RAND study compiled examples of strong assessment practices across sectors, including defense, marketing, public relations, and academia, distilling and synthesizing insights and advice for the assessment of DoD IIP efforts and programs. This handbook was designed to be an easy-to-navigate, quick-reference guide to planning and conducting assessments of DoD IIP efforts, analyzing the data generated, and presenting the results. It also offers some background on current assessment practices in DoD and the typical users and uses of DoD IIP assessment results. A companion volume, Assessing and Evaluating Department of Defense Efforts to Inform, Influence, and Persuade: Desk Reference, offers a more detailed exploration and additional examples of assessment in practice.

Key Findings

Across Sectors, Best Practices for Assessing Efforts to Inform, Influence, and Persuade Adhere to a Handful of Common Principles

  • Effective assessment requires clear, realistic, and measurable goals.
  • Effective assessment starts in planning.
  • Effective assessment requires a theory of change or explicit logic of the effort that connects activities to objectives.
  • Change cannot be measured without a baseline.
  • Assessment over time requires continuity and consistency.
  • Assessment is iterative, not something planned and executed once.
  • Assessment requires resources, but any assessment that reduces uncertainty is valuable.

DoD Has Historically Struggled to Assess the Progress and Effectiveness of Its IIP Efforts

  • There is a lack of shared understanding about how IIP efforts function, which broadens the scope of the assessment questions asked. Good accountability assessments would show not only that these efforts support broader military campaign and national security goals but also how they do so.
  • In complex operating environments, IIP efforts often face constraints, disruptors, and unintended consequences. Good assessment can help predict these challenges and overcome them when they do arise.
  • Good assessment can support learning from both success and failure. Well-designed, early assessment can help identify problems and get a struggling IIP effort on a path to success.
  • Organizations that do assessment well have cultures that value assessment. Organizing for assessment involves dedicating the necessary resources to the assessment process (5 percent of program resources is a common benchmark); ensuring leadership buy-in, advocacy, and willingness to learn from assessment results; training assessment personnel; and implementing a system of continuous assessment, data collection, and program change in response to assessment results.

Recommendations

  • Practitioners require and should demand specific, measurable, achievable, relevant, and time-bound (SMART) campaign objectives for the purposes of assessment. If program and activity managers cannot provide objectives that meet these standards, assessment practitioners should infer or create their own.
  • Practitioners should be explicit about the theory of change or the underlying logic of the effort that connects DoD IIP activities to objectives. If commanders or program designers have not outlined an explicit logic for an activity, assessment practitioners should elicit or develop one in support of assessment.
  • Practitioners require resources for assessment. Assessment is not free, and if its benefits are to be realized, DoD must provide the necessary resources.
  • Practitioners must take care to match the design, rigor, and presentation of assessment results to the intended uses and users. Assessment supports decisionmaking, and providing the best decision support possible should remain at the forefront of practitioners' minds.
  • When complex environments or human dynamics inject uncertainty into IIP efforts, fail fast. That is, field an experimental effort with rapid assessment, learning from initial shortcomings and making corrections until the effort is effective.

Table of Contents

  • Chapter One: About This Handbook
  • Chapter Two: Assessment Best Practices and Applying Them to DoD IIP Efforts
  • Chapter Three: Why Evaluate? An Overview of Assessment and Its Uses
  • Chapter Four: Determining What's Worth Measuring: Objectives
  • Chapter Five: Determining What's Worth Measuring: Theories of Change and Logic Models
  • Chapter Six: Developing Measures for DoD IIP Efforts
  • Chapter Seven: Designing and Implementing Assessments
  • Chapter Eight: Formative and Qualitative Research Methods for DoD IIP Efforts
  • Chapter Nine: Surveys and Sampling in DoD IIP Assessment: Best Practices and Challenges
  • Chapter Ten: Measurement: Collecting IIP Outputs, Outcomes, and Impacts
  • Chapter Eleven: Presenting and Using Assessment
  • Chapter Twelve: Developing a Culture of Assessment
  • Chapter Thirteen: Conclusions and Recommendations

This research was conducted within the International Security and Defense Policy Center of the RAND National Defense Research Institute, a federally funded research and development center sponsored by the Office of the Secretary of Defense, the Joint Staff, the Unified Combatant Commands, the Navy, the Marine Corps, the defense agencies, and the defense Intelligence Community.

This report is part of the RAND Corporation Research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.

The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.