Getting to Outcomes
Step 08. Outcome Evaluation
This step helps with planning an outcome evaluation and using the results from it. An outcome evaluation reveals how well you met the goals and desired outcomes you set for the program in GTO Step 2.
What Is This Step?
GTO Step 8 involves evaluating how well the program achieved the intended outcomes. Did the participants in the program change on the desired outcomes, such as knowledge, attitudes, and behaviors? This step is called outcome evaluation because the collected data track the desired outcomes of the program, as opposed to the process of program delivery (GTO Step 7). The outcome evaluation should be planned before the program begins and should have specific time points for data collection, such as before and after the program has gone through a complete cycle.
Why Is This Step Important?
The purpose of Step 8 is to understand whether you have met the goals and desired outcomes established in GTO Step 2. Combined with the results of your process evaluation (GTO Step 7), this step begins to identify areas for improvement, helping you address missed outcomes while maintaining the ones you achieved. Outcome evaluation results can also help you demonstrate the effectiveness of your program to your funders and other stakeholders.
How Do I Carry Out This Step?
In GTO Step 8, you need an evaluation design and a data collection and analysis plan, including:
- a measurement tool (e.g., a pre-/post-survey)
- a target population to be measured (e.g., all the participants in the program)
- a timeline for when to collect the data (e.g., from the pre-/post-survey)
- a plan for entering the collected data (usually into a spreadsheet)
- a plan for analysis to determine whether outcomes were achieved (e.g., the change from the pre-survey to the post-survey).
Outcome evaluations can be complex and costly and are often intimidating for program staff. This guide is meant to assist with simple outcome evaluations. If you want to carry out more-complicated outcome evaluations, you may need to get help from a trained program evaluator.
A design is the term for the type of evaluation you will conduct. The design guides when you collect data and from which groups. For example, a simple and inexpensive design uses a questionnaire to collect data from program participants just before the program begins and again after it ends (often called a pre-/post- design). This design might be appropriate for assessing competency as an outcome of a train-the-trainer program.

Another type of design, called the pre-/post- with comparison group, compares program participants with a similar group not receiving the program during the same time period. This way, you can check whether changes from pre- to post- occurred only among the participants getting the program or among nonparticipants as well (i.e., if both groups improve by the same amount, then the program did not have an effect). This improves confidence that differences were due to the program and not to something else, which is why this design is a stronger way to evaluate whether the program led to changes in knowledge, attitudes, or behaviors over time. However, this design is more complicated, so you may want to consult a program evaluator.

Finally, sometimes you may be interested only in how participants did in a program at the end. Surveying participants only at the end is called a post-only design. It is the easiest to do, but it is the weakest type of evaluation: You have no information about where participants started, so you cannot measure how much they changed, and it includes only participants who completed the program.
In outcome surveys, individual survey questions are often grouped into topical categories called scales. For example, a knowledge scale may include several questions assessing different types of knowledge, and the responses to those questions can be combined into a single scale score. The analysis of these data is then straightforward: Score each scale for each respondent, calculate the average for the group surveyed, and compare the pre- and post- scores.
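To make the scale-scoring arithmetic concrete, here is a minimal Python sketch. The three "knowledge" items, the 1-5 response values, and the scoring rule (averaging the item responses) are all hypothetical illustrations, not part of the GTO tools.

```python
# Minimal sketch: score a survey scale per participant, then compare
# pre-/post- group averages. All data below are invented for illustration.

def scale_score(responses):
    """Score one participant's scale: here, the mean of the item responses."""
    return sum(responses) / len(responses)

def group_average(participants):
    """Average the scale scores across all participants surveyed."""
    return sum(scale_score(r) for r in participants) / len(participants)

# Each inner list holds one participant's answers to three knowledge items.
pre_responses = [[2, 3, 2], [1, 2, 2], [3, 3, 4]]
post_responses = [[4, 4, 3], [3, 3, 4], [4, 5, 4]]

pre_avg = group_average(pre_responses)    # about 2.44
post_avg = group_average(post_responses)  # about 3.78
print(f"Knowledge scale: pre = {pre_avg:.2f}, post = {post_avg:.2f}")
```

In practice, the scoring rule comes from the documentation of the measure you chose; averaging items is just one common rule.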
Tip 8-1. Examples of survey questions for different outcome areas
| Outcome Area | Survey Question(s) | Response Options |
|---|---|---|
| Perceived level of preparedness | How well prepared do you feel your household is to handle a large-scale disaster or emergency? | Well prepared, somewhat prepared, or not at all prepared |
| Preparedness kit | | Yes, no, don’t know/unsure |
| Communication plans | Communication with family: …; Communication with authorities: … | |
| Evacuation plans | | Yes, no, don’t know/unsure. Reason if noncompliance: What would be the main reason you might not evacuate if asked to do so? |
| Organizational ties | | Yes, no, don’t know/unsure |
| | In the past year, have you … | Yes, no, don’t know/unsure |
In addition, the Final Report of the 2007 Public Health Response to Emergency Threats Survey (PHRETS), conducted in Los Angeles County by the David Geffen School of Medicine at UCLA, includes all the PHRETS questions on individual, workplace, school, and daycare preparedness and on shelter-in-place, evacuation, and communications plans.
Tip 8-2. Data collection methods for measuring desired outcomes
| Method Type | Method | Pros | Cons | Cost |
|---|---|---|---|---|
| Surveys | Self-administered surveys | | | Low to moderate |
| | Telephone surveys | | | Moderate to high, depending on number of surveys to complete |
| | Face-to-face structured surveys | Same as self-administered, but you can clarify responses | Same as self-administered, but requires more time and staff time | High |
| | Recorded interviews | | | Low |
| Open-ended interactions | Open-ended face-to-face interviews | Gather in-depth, detailed info; info can be used to generate survey questions | | |
| | Open-ended questions on a written survey | | | Low |
| | Focus groups | | | |
| Other | Observation (of children, parents, program staff) | | | Low to moderate if done by staff or volunteers |
Source: Adapted from Hannah, McCarthy, and Chinman, 2011.
Tip 8-3. Reporting evaluation results for different audiences
Obviously, the most important reason we evaluate what we’re doing is that we want to know whether we’re having an impact. However, sharing our results in simple, meaningful ways can have other benefits as well. Keep in mind that different groups of stakeholders may be interested in different types of information; the general public may be less interested in detailed data than funders or local policymakers are. In this tip, we have included some different ways that information might be reported for different audiences.
| Audience | Information of Interest | Example of Reporting Method |
|---|---|---|
| Funder | Whether the program is working | |
| Community members | | |
| Agency staff | Whether the program is working; how the program can be improved | Detailed report with executive summary of findings |
Tools Used in This Step
The Outcome Evaluation Planner Tool
The Outcome Evaluation Planner Tool will help you plan your Outcome Evaluation.
Instructions
This tool will help you plan how to carry out your outcome evaluation. While this tool allows you to create your own outcome evaluation survey items, we recommend that, whenever possible, you choose measures that already exist and have been used to evaluate programs like yours. Some programs have their own outcomes survey. With this tool, you can also choose your design (i.e., pre-/post-, pre-/post- with comparison group).
- Make as many copies of the tool as necessary so that you have a row for each of your program’s outcomes.
- Review the desired outcome statements from the SMART Desired Outcomes Tool you completed in GTO Step 2, and copy each one into the first column.
- Check the appropriate box in the Evaluation Design column to indicate your choice of evaluation design for each outcome.
- Next, identify the scales and/or existing or new questions that you will use to measure each desired outcome statement. See resources in this guide, the literature, and manuals for programs like yours for examples.
- Select a measure that can be used to assess each desired outcome. Enter this in the next column.
- In the next column, indicate where you found the scale or questions (for example, your program’s survey).
- In the last column, enter “All” if you are using all the items in the scale, or enter the number of items from the scale that you will use.
- With this tool completed, you can construct your outcome survey questionnaire. Add any additional questions, such as demographics or level of participation or satisfaction, that you also decide to measure. (A sketch of how completed planner rows might be organized follows this list.)
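As an illustration only, here is one way the completed planner rows could be kept as structured data so that the survey questionnaire can be assembled from them. The field names mirror the tool’s columns and the two rows match the example below, but the code itself is a hypothetical convenience, not part of the GTO toolkit, and the evaluation design shown is assumed.

```python
# Hypothetical representation of Outcome Evaluation Planner rows as Python
# dicts. Field names mirror the tool's columns; rows match the example below.

planner_rows = [
    {
        "desired_outcome": "Increase participants with a 7-day household "
                           "emergency water supply by 20% from baseline to follow-up",
        "evaluation_design": "pre-/post-",  # assumed for illustration
        "scale_or_questions": "BRFSS General Preparedness Module",
        "source": "BRFSS, 2012",
        "items_to_include": "Question 2",
    },
    {
        "desired_outcome": "Increase participants on prescription medication "
                           "with a 7-day extra supply by 15% from baseline to follow-up",
        "evaluation_design": "pre-/post-",  # assumed for illustration
        "scale_or_questions": "CHIS Emergency Preparedness",
        "source": "CHIS, 2009",
        "items_to_include": "Questions EM1 & EM2",
    },
]

# Assemble a simple checklist of items the outcome survey must contain.
for row in planner_rows:
    print(f"{row['items_to_include']} from {row['scale_or_questions']} "
          f"({row['source']}) -> measures: {row['desired_outcome']}")
```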
Example
- Completed by: Project team/evaluator
- Date: April
- Program: ROAD-MAP
| Desired Outcome | Evaluation Design | Scale Name/Questions | Source of Scale/Questions | Items to Include |
|---|---|---|---|---|
| To increase the number of program participants indicating that they possess a 7-day household emergency water supply by 20% from baseline to follow-up (3-month period) | | Behavioral Risk Factor Surveillance System (BRFSS) General Preparedness Module | BRFSS, 2012 | Question 2 from BRFSS General Preparedness Module |
| To increase the number of program participants who regularly take prescription medication who possess a 7-day extra supply by 15% from baseline to follow-up (3-month period) | | California Health Interview Survey (CHIS) Emergency Preparedness | CHIS, 2009 | Questions EM1 & EM2 |
The Outcome Evaluation Summary Tool
The Outcome Evaluation Summary Tool will help you interpret the results of your Outcome Evaluation.
Instructions
This tool helps interpret your survey data to see how much change you achieved on the desired outcomes. With this tool you can summarize your pre- and post- scores for your program participants and a comparison group (if you have one).
- Make as many copies of the tool as you need.
- Copy over your measures (scales or questions) from the Outcome Evaluation Planner Tool.
- Enter the results from your survey instruments in the remaining columns.
- If you have pre-program data, calculate the pre-program averages for the participants in two parts:
- First, apply the scoring rule to each scale for each participant.
- Second, calculate averages across all participants for each scale or item. For each scale, add the scale scores for each participant together, then divide by the number of participants. Place this final number into the Pre-Program Score column of the tool in the space labeled “Program.” Do the same for single items.
- Repeat the same procedure to generate post-program averages, if you have post-program data.
- If you have data for a comparison group, calculate pre- and post- averages for each scale and enter them into the tool in the space labeled “Comparison” (below the participants’ scores); if you do not, write in “Not applicable” (N/A).
- For each scale, calculate the percentage change from the pre- to post- averages:
- Subtract the pre-program average from the post-program average.
- Divide the result by the pre-program average.
- Convert to a percentage (you can do this by multiplying by 100).
- If you used a comparison or control group, calculate the percentage change for that group as well (for each scale), and enter it in the appropriate column.
- Briefly summarize the meaning of each result in the Interpretation column. For example, if there is a 50-percent increase in knowledge among the program participants but only a 10-percent increase in the comparison group, you might interpret this greater positive change as a result of the program (this calculation is sketched in code after this list).
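To illustrate the percentage-change arithmetic, here is a minimal Python sketch using hypothetical group averages chosen to match the interpretation example above (participants improve by 50 percent, the comparison group by 10 percent); the numbers are invented for illustration.

```python
# Minimal sketch of the Outcome Evaluation Summary Tool's percentage-change
# calculation: (post - pre) / pre, converted to a percentage.

def percentage_change(pre_avg, post_avg):
    """Percentage change from the pre-program to the post-program average."""
    return (post_avg - pre_avg) / pre_avg * 100

# Hypothetical group averages on a knowledge scale.
program_pre, program_post = 2.0, 3.0        # participants: 50% increase
comparison_pre, comparison_post = 2.0, 2.2  # comparison group: 10% increase

print(f"Program change:    {percentage_change(program_pre, program_post):.1f}%")
print(f"Comparison change: {percentage_change(comparison_pre, comparison_post):.1f}%")

# If participants improve much more than the comparison group, you might
# interpret the extra gain as a result of the program.
```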
Example
- Completed by: Project team/evaluator
- Date: August
- Program: ROAD-MAP
| Item/Scale Name | Pre-Program Score | Post-Program Score | Percentage Change [(post- minus pre-) divided by pre-] | Interpretation |
|---|---|---|---|---|
| | | | | The number of program participants who have a 7-day household water supply increased by 33.9% from pre-program evaluation to follow-up. |
| Peer-Mentored Preparedness (PM-Prep) Preparedness Index (Q1, Q6, and Q7) | | | | The number of program participants with a household communication plan increased by 100% from pre-program evaluation to follow-up. |
| | | | | This is not an outcome measure, but rather an introductory question before asking the two outcome questions. |
| | Program: 14% | Program: 16% | 14.2% | The number of program participants who regularly take prescription medication who have a 1-week extra supply increased by 14.2% from pre-program evaluation to follow-up. |
- Program: Scores for the group of participants who received the program.
- Comparison: Scores for a similar group that did not receive the program.
When these are complete, you will be ready to undertake program improvement using GTO Step 9.
Step Checklist
When you finish working on this step, you should have:
- Completed the Step 8 tools
- Identified the questions you want the evaluation to answer
- Chosen the measures you want to collect
- Developed methods to use in the outcome evaluation
- Developed and finalized a plan to put those methods into place
- Conducted the outcome evaluation (collected your data)
- Analyzed data and interpreted your findings
- Reported your results
Before Moving On
You should have some idea at this point whether you have actually achieved your desired outcomes. The final two steps in this process will help you reflect on what you’ve done, fine-tune your work before you conduct your program again, and bring together a set of ideas about how to sustain your work.