Getting to Outcomes: Step 08. Outcome Evaluation

This step helps with planning an outcome evaluation and using its results. An outcome evaluation reveals how well you met the goals and desired outcomes you set for the program in GTO Step 2.

What Is This Step?

GTO Step 8 involves evaluating how well the program achieved the intended outcomes. Did the participants in the program change on the desired outcomes, such as knowledge, attitudes, and behaviors? This step is called outcome evaluation because the collected data track the desired outcomes of the program, as opposed to the process of program delivery (GTO Step 7). The outcome evaluation should be planned before the program begins and should have specific time points for data collection, such as before and after the program has gone through a complete cycle.

Why Is This Step Important?

The purpose of Step 8 is to understand whether you have met the goals and desired outcomes established in GTO Step 2. Combined with the results of your process evaluation (GTO Step 7), this step begins to identify areas for improvement, so that you can address outcomes the program missed while maintaining the ones it achieved. Outcome evaluation results can also help you demonstrate the effectiveness of your program to your funders and other stakeholders.

How Do I Carry Out This Step?

In GTO Step 8, you need an evaluation design and a data collection and analysis plan, including:

  • a measurement tool (e.g., a pre-/post-survey)
  • a target population to be measured (e.g., all the participants in the program)
  • a timeline for when to collect the data (e.g., from the pre-/post-survey)
  • a plan for entering the collected data (usually into a spreadsheet)
  • a plan for analysis to determine whether outcomes were achieved (e.g., the change from the pre-survey to the post-survey).

Outcome evaluations can be complex and costly and are often intimidating for program staff. This guide is meant to assist with simple outcome evaluations. If you want to carry out more-complicated outcome evaluations, you may need to get help from a trained program evaluator.

A design is the term for the type of evaluation you will conduct; it guides when you collect data and from which groups. For example, a simple and inexpensive design uses a questionnaire to collect data from program participants just before the program begins and again after it is completed (often called a pre-/post- design). This design might be appropriate to assess competency as an outcome of a train-the-trainer program. Another design, the pre-/post- with comparison group, compares program participants with a similar group not receiving the program during the same time period. If both groups improve by the same amount, the program did not have an effect; if participants improve more than the comparison group does, you can be more confident that the difference was due to the program and not to something else. That is why this design is a stronger way to evaluate whether the program led to changes in knowledge, attitudes, or behaviors over time. However, it is also more complicated, so you may want to consult a program evaluator. Finally, sometimes you may be interested only in how participants did at the end of a program. Surveying participants only at the end is called a post-only design. It is the easiest to carry out, but it is the weakest type of evaluation: you have no information about how much change occurred from before the program started, and it includes only participants who completed the program.
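The comparison-group logic described above can be made concrete with a short sketch (Python; the percentages and variable names are hypothetical illustrations, not figures from the guide):

```python
# Pre-/post- with comparison group: did participants improve MORE than a
# similar group that did not receive the program?

# Hypothetical shares of each group meeting a desired outcome
program_pre, program_post = 0.56, 0.75        # program participants
comparison_pre, comparison_post = 0.55, 0.58  # similar nonparticipants

program_change = program_post - program_pre           # change in program group
comparison_change = comparison_post - comparison_pre  # change in comparison group

# If both groups had improved by the same amount, the program would show no
# effect; the excess change in the program group is what builds confidence
# that the program, not outside events, caused the difference.
net_effect = program_change - comparison_change

print(f"Program change:    {program_change:.2f}")
print(f"Comparison change: {comparison_change:.2f}")
print(f"Net effect:        {net_effect:.2f}")
```
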

In outcome surveys, individual survey questions are often grouped together into topical categories called scales. For example, a knowledge scale may include several questions assessing different types of knowledge. The question responses can be combined to form a scale. Then, the analysis of these data can also be done easily by scoring each scale, calculating the average for the group surveyed, and then comparing the pre- and post- scores.
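The scale scoring and averaging just described can be sketched in a few lines (Python; the response data and the 1-to-5 rating format are hypothetical):

```python
# Score a scale for each participant (the mean of that participant's item
# responses), then average the scale scores across the group surveyed.

def scale_score(item_responses):
    """One participant's scale score: the mean of their item responses."""
    return sum(item_responses) / len(item_responses)

def group_average(all_participants):
    """Average of the scale scores across all participants."""
    scores = [scale_score(items) for items in all_participants]
    return sum(scores) / len(scores)

# Each inner list is one participant's answers (1-5 ratings) to the
# three items of a hypothetical knowledge scale.
pre_surveys = [[2, 3, 2], [3, 3, 4], [2, 2, 3]]
post_surveys = [[4, 4, 3], [4, 5, 4], [3, 4, 4]]

pre_avg = group_average(pre_surveys)    # group average before the program
post_avg = group_average(post_surveys)  # group average after the program
print(f"Pre: {pre_avg:.2f}  Post: {post_avg:.2f}")
```

Comparing `pre_avg` with `post_avg` is the pre-/post- comparison described above.
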

Tip 8-1. Examples of survey questions for different outcome areas

Outcome area: Perceived level of preparedness
  • Survey question: How well prepared do you feel your household is to handle a large-scale disaster or emergency?
    Response options: well prepared, somewhat prepared, or not at all prepared

Outcome area: Preparedness kit (response options for each question: yes, no, don’t know/unsure)
  • Water supply: Does your household have a 7-day supply of water for everyone who lives there? (1 gallon per person per day)
  • Food supply: Does your household have a 7-day supply of nonperishable food for everyone who lives there? (Food that does not require refrigeration or cooking)
  • Medication supply: Does your household have a 7-day supply of prescription medication for each person who takes prescription medications?
  • Battery-operated radio: Does your household have a working battery-operated radio and working batteries for your use if the electricity is out?
  • Flashlight with batteries: Does your household have a working flashlight and working batteries for your use if the electricity is out?
  • Personal comfort item: Does your preparedness kit have a personal comfort item for each household member, such as chocolate, a picture of loved ones, or a stuffed animal?

Outcome area: Communication plans
  • Communication with family: In a large-scale disaster, what would be your main method or way of communicating with relatives and friends?
    Response options: (1) regular home telephones, (2) cell phones, (3) email, (4) pager, (5) 2-way radios, (6) other, (7) don’t know/unsure
  • Communication with authorities: What would be your main method or way of getting information from authorities in a large-scale disaster or emergency?
    Response options: (1) television, (2) radio, (3) internet, (4) print media, (5) neighbors, (6) other, (7) don’t know/unsure

Outcome area: Evacuation plans
  • Mandatory evacuation compliance: If public authorities announced a mandatory evacuation from your community due to a large-scale disaster or emergency, would you evacuate?
    Response options: yes, no, don’t know/unsure
  • Reason if noncompliance: What would be the main reason you might not evacuate if asked to do so?
    Response options: (1) lack of transportation, (2) lack of trust in public officials, (3) concern about leaving property behind, (4) concern about personal safety, (5) concern about family safety, (6) concern about leaving pets, (7) concern about traffic jams and inability to get out, (8) health problems (could not be moved), (9) other

Outcome area: Organizational ties (response options for each question: yes, no, don’t know/unsure)
  • Formal support: Do you currently belong to a community organization (such as a school, church or other faith community, or a volunteer organization) that you can depend on in a disaster?
  • In the past year, have you
    – received any specific preparedness trainings, such as first aid, CPR [cardiopulmonary resuscitation], mental health first aid, or community preparedness?
    – participated in an emergency drill or exercise?
    – conducted outreach to neighbors or friends to talk about preparedness?
    – volunteered in a community activity related to disaster preparedness (e.g., outreach event, attended a presentation)?
    – identified an organization you can rely on during a disaster?

In addition, the Final Report of the 2007 Public Health Response to Emergency Threats Survey (PHRETS), conducted in Los Angeles County by the David Geffen School of Medicine at UCLA, includes all the PHRETS questions on individual, workplace, school, and daycare preparedness and on shelter-in-place, evacuation, and communications plans.

Tip 8-2. Data collection methods for measuring desired outcomes

Surveys

  Self-administered surveys
    • Pros: anonymous; inexpensive; easy to analyze; standardized; easy to compare with other data
    • Cons: could be biased if respondents do not understand the questions or answer honestly; may not get very many responses; some respondents may not answer all of the questions
    • Cost: low to moderate

  Telephone surveys
    • Pros: easy to analyze; standardized; easy to compare with other data
    • Cons: same as above, but those without phones may not respond; others may ignore calls
    • Cost: moderate to high, depending on number of surveys to complete

  Face-to-face structured surveys
    • Pros: same as self-administered, but you can clarify responses
    • Cons: same as self-administered, but requires more time and staff time
    • Cost: high

  Recorded interviews
    • Pros: objective; quick; does not require new participants
    • Cons: can be difficult to interpret; data are often incomplete
    • Cost: low

Open-ended interactions

  Open-ended face-to-face interviews
    • Pros: gather in-depth, detailed info; info can be used to generate survey questions
    • Cons: takes much time and expertise to conduct and analyze; potential for interviewer bias
    • Cost: low to moderate if done in house; can be high if hiring outside interviewers and/or transcribers

  Open-ended questions on a written survey
    • Pros: can add more in-depth, detailed info to a structured survey
    • Cons: people often do not answer them; may be difficult to interpret the meaning of written statements
    • Cost: low

  Focus groups
    • Pros: can quickly get info about attitudes, perceptions, and social norms; info can be used to generate survey questions
    • Cons: cannot get individual-level data; can be difficult to run; hard to generalize themes to a larger group; may be hard to gather 6–8 persons at the same time; sensitive topics may be difficult to address
    • Cost: low if done in house; can be high if hiring a professional; incentives are usually offered to get participants

Other

  Observation (of children, parents, program staff)
    • Pros: can provide detailed information about a program, a family, etc.
    • Cons: observer can be biased; can be a lengthy process
    • Cost: low to moderate if done by staff or volunteers

Source: Adapted from Hannah, McCarthy, and Chinman, 2011.

Tip 8-3. Reporting evaluation results for different audiences

The most important reason we evaluate is, of course, to learn whether we are having an impact. However, sharing results in simple, meaningful ways can serve other useful purposes as well. Keep in mind that different groups of stakeholders may be interested in different types of information; the general public, for example, may be less interested in detailed data than funders or local policymakers are. This tip includes some ways that information might be reported for different audiences.

Audience: Funder
  • Information of interest: whether the program is working
  • Example reporting methods: detailed report with executive summary of findings; grant application (if applicable)

Audience: Community members
  • Information of interest: whether the program is working; how the program can be improved; how the program is impacting community members

Audience: Agency staff
  • Information of interest: whether the program is working; how the program can be improved
  • Example reporting method: detailed report with executive summary of findings

Tools Used in This Step

The Outcome Evaluation Planner Tool

The Outcome Evaluation Planner Tool will help you plan your Outcome Evaluation.

The Outcome Evaluation Planner Tool

Instructions

This tool will help you plan how to carry out your outcome evaluation. While this tool allows you to create your own outcome evaluation survey items, we recommend that, whenever possible, you choose measures that already exist and have been used to evaluate programs like yours. Some programs have their own outcomes survey. With this tool, you can also choose your design (i.e., pre-/post-, pre-/post- with comparison group).

  1. Make as many copies of the tool as necessary so that you have a row for each of your program’s outcomes.
  2. Review the desired outcome statements from the SMART Desired Outcomes Tool you completed in GTO Step 2, and copy each desired outcome into the first column.
  3. Check the appropriate box in the Evaluation Design column to indicate your choice of evaluation design for each outcome.
  4. Next, identify the scales and/or existing or new questions that you will use to measure each of your desired outcomes statements. See resources in this guide, literature, and manuals for programs like yours for examples.
  5. Select a measure that can be used to assess each desired outcome. Enter this in the next column.
  6. In the next column, indicate from where you are pulling the scale or questions (for example, your program’s survey).
  7. In the last column, enter “All” if you are using all the items in the scale, or enter the number of items from a scale that you will use.
  8. With this tool completed, you can construct your outcome survey questionnaire. Add any additional questions, such as demographics or level of participation or satisfaction, that you also decide to measure.

Example

  • Completed by: Project team/evaluator
  • Date: April
  • Program: ROAD-MAP
Outcome Evaluation Planner Tool (filled out for demonstration purposes)
Desired Outcome | Evaluation Design | Scale Name/Questions | Source of Scale/Questions | Items to Include

To increase the number of program participants indicating that they possess a 7-day household emergency water supply by 20% from baseline to follow-up (3-month period)

Example Checklist:
  • Unchecked: Pre-/post- with comparison group
  • Checked: Pre-/post-
  • Unchecked: Post- only

Behavioral Risk Factor Surveillance System (BRFSS) General Preparedness Module

Q2: Does your household have a [7] day supply of water for everyone who lives there? A [7] day supply of water is 1 gallon of water per person per day.
  • 1: Yes
  • 2: No
  • 7: Don’t know/Not sure
  • 9: Refused

BRFSS, 2012

Question 2 from BRFSS General Preparedness Module

To increase the number of program participants who regularly take prescription medication who possess a 7-day extra supply by 15% from baseline to follow-up (3-month period)

Example Checklist:
  • Unchecked: Pre-/post- with comparison group
  • Checked: Pre-/post-
  • Unchecked: Post- only

California Health Interview Survey (CHIS) Emergency Preparedness

QA09_EM1: Do you take any medicine daily that a doctor prescribed?
  • 1: Yes
  • 2: No
QA09_EM2: Do you have at least an extra [one] week supply of all the prescription drugs you take every day?
  • 1: Yes
  • 2: No
  • –8: Don't know
  • If No:
    Could you get an extra [one] week supply of all your prescription drugs?
    • 1: Yes
    • 2: No
    • –8: Don't know
    • If No:
      What is the main reason you would not be able to get an extra supply of your prescription drugs?
      Don't know

CHIS, 2009

Questions EM1 & EM2

The Outcome Evaluation Summary Tool

The Outcome Evaluation Summary Tool will help you interpret the results of your Outcome Evaluation.

The Outcome Evaluation Summary Tool

Instructions

This tool helps interpret your survey data to see how much change you achieved on the desired outcomes. With this tool you can summarize your pre- and post- scores for your program participants and a comparison group (if you have one).

  1. Make as many copies of the tool as you need.
  2. Copy over your measures (scales or questions) from the Outcome Evaluation Planner Tool.
  3. Enter the results from your survey instruments in the remaining columns.
  4. If you have pre-program data, calculate the pre-program averages for the participants in two parts:
    • First, apply the scoring rule on each scale for each participant.
    • Second, calculate averages across all participants for each scale or item. For each scale, add the scale scores for each participant together, then divide by the number of participants. Place this final number into the Pre-Program Score column of the tool in the space labeled “Program.” Do the same for single items.
  5. Repeat the same procedure to generate post-program averages, if you have post-program data.
  6. If you have data for a comparison group, you will need to calculate pre- and post- averages for each scale and enter them into the tool in the space labeled “Comparison” (below the participants’ scores) or write in “Not applicable” (N/A).
  7. For each scale, calculate the percentage change from the pre- to post- averages:
    • Subtract the pre-program average from the post-program average.
    • Divide the result by the pre-program average.
    • Convert to a percentage (you can do this by multiplying by 100).
  8. If you used a comparison or control group, calculate the percentage change for that group as well (for each scale), and enter it in the appropriate column.
  9. Briefly summarize the meaning of each result in the Interpretation column. For example, if there is a 50-percent increase in knowledge among the program participants but only a 10-percent increase in the comparison group, you might interpret this greater positive change as a result of the program.
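The percentage-change calculation in step 7 amounts to one line of arithmetic. A minimal sketch (Python; the 56% and 75% figures come from the water-supply row of the filled-out example):

```python
def percent_change(pre_avg, post_avg):
    """Percentage change: (post minus pre) divided by pre, times 100."""
    return (post_avg - pre_avg) / pre_avg * 100

# 56% of participants reported a 7-day water supply pre-program,
# 75% post-program.
change = percent_change(56, 75)
print(f"{change:.1f}%")  # 33.9%
```
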

Example

  • Completed by: Project team/evaluator
  • Date: August
  • Program: ROAD-MAP
Outcome Evaluation Summary Tool (filled out for demonstration purposes)
Item/Scale Name Pre-Program Score Post-Program Score Percentage Change [(post- minus pre-) divided by pre-] Interpretation
Does your household have a 7-day supply of water for everyone who lives there? A 7-day supply of water is 1 gallon of water per person per day.
  • 1: Yes
  • 2: No
  • 7: Don’t know/Not sure
Program: 56%
Comparison: N/A
Program: 75%
Comparison: N/A
33.9%
The number of program participants who have a 7-day household water supply increased by 33.9% from pre-program evaluation to follow-up.
Peer-Mentored Preparedness (PM-Prep) Preparedness Index (Q1, Q6, and Q7)
Program: 35%
Comparison: N/A
Program: 70%
Comparison: N/A
100%

The number of program participants with a household communication plan increased by 100% from pre-program evaluation to follow-up.

Do you take any medicine daily that a doctor prescribed?
  • 1: Yes
  • 2: No
  • –8: Don’t know
Program: 85%
Comparison: N/A
Program: 85%
Comparison: N/A
N/A

This is not an outcome measure, but rather an introductory question before asking the two outcome questions.

Do you have at least an extra [one] week supply of all the prescription drugs you take every day?
  • 1: Yes
  • 2: No
  • –8: Don’t know
Program: 14%
Comparison: N/A
Program: 16%
Comparison: N/A
14.3%
The number of program participants who regularly take prescription medication who have a 1-week extra supply increased by 14.3% from pre-program evaluation to follow-up.
Notes:
  • Program: Scores for the group of participants who received the program.
  • Comparison: Scores for the group of participants who did not receive the program.

When these are complete, you will be ready to undertake program improvement using GTO Step 9.

Step Checklist

When you finish working on this step, you should have:

  • Completed the Step 8 tools
  • Identified the questions you want the evaluation to answer
  • Chosen the measures you want to collect
  • Developed methods to use in the outcome evaluation
  • Developed and finalized a plan to put those methods into place
  • Conducted the outcome evaluation (collected your data)
  • Analyzed data and interpreted your findings
  • Reported your results

Before Moving On

You should have some idea at this point whether you have actually achieved your desired outcomes. The final two steps in this process will help you reflect on what you’ve done, fine-tune your work before you conduct your program again, and bring together a set of ideas about how to sustain your work.

Up Next:

Step 09. Continuous Quality Improvement (CQI)

This step provides a framework for using process and outcome evaluation data to make program improvements.
