Evaluate Outcomes of the Program
Evaluating the implementation of your program (process evaluation) is important, but ultimately you want to know whether you are improving outcomes for the children and families you serve. Planning and completing an outcome evaluation will help determine whether the program achieved the goals and desired outcomes that you set forth in Step 2.
Overview of outcome evaluation
In the previous steps, you developed a plan for implementing your program. Step 7 helped you plan to monitor your program's implementation. Now it is time to determine whether your program has had the effects that you were hoping for. Combining the process evaluation developed in Step 7 with the outcome evaluation in Step 8 will give you a complete picture of your program's impact.
The type of outcome evaluation we describe here examines whether participants in the program are achieving the outcomes that the program has previously achieved, as well as those outcomes that you have identified as priorities. This is sometimes referred to as outcome monitoring (Chen, 2005). This type of outcome evaluation is invaluable for program planning and management.
Note that this is different from effectiveness or efficacy evaluation, which uses rigorous research designs, such as the randomized controlled trial, to attribute outcome changes to the program by comparing participants' outcomes with those of a comparison group.
Outcome monitoring is good at helping you manage your program, especially an evidence-based program. However, it cannot produce new evidence that an untested program works, because it does not employ research methods with a comparison group, as required by the criteria that the U.S. Department of Health and Human Services (DHHS) uses to designate home visiting programs as "evidence-based." For more information on this type of evaluation, see Chen (2005); for more information on the DHHS criteria for evidence-based programs, see the HomVEE study rating descriptions.
The tasks in this step will help you:
- Specify the evaluation questions being asked
- Identify what should be measured, for whom, and how often
- Plan the analysis or comparison to be used
- Develop and finalize a plan to put those methods into place
- Conduct the outcome evaluation
- Analyze the data, interpret the findings, and report your results.
The following sections will walk you through a brief description of each of these tasks. To help you see where you are going, take a look at the Outcome Evaluation Planning Tool, which is organized by the steps outlined above. The tasks described in the next sections will help you fill in this tool. Once you have completed the tool, it will serve as the outcome evaluation plan for your program.
Instructions for using the Outcome Evaluation Planning Tool
The instructions for completing the Outcome Evaluation Planning Tool are presented here in a different format than in previous steps. In the upcoming sections of this chapter, the instructions are broken down by topic, corresponding to the six tasks listed above. As you read through the tasks in this step, we'll give you the information you need to fill in each of the columns in the tool. At the end of this chapter, we will also show you how Townville filled out each column.
First, make as many copies of the tool as you have desired outcomes.
Start by writing each of your desired outcome statements in each table in the space provided in the far left-hand column of the tool. Remember, you generated those in Step 2. You will fill in the information called for in the tool (evaluation question, measures, analysis approach) for each one of your desired outcomes. At the end of each topic section, look for the instructions that begin with Using the Outcome Evaluation Tool.
Next, specify the evaluation questions you want to answer.
You are likely to specify evaluation questions at both the population level and the individual level. Population-level questions concern the community or the geographic unit for which the needs assessment data were measured in Step 1. Your goals from Step 2 may be stated at the population level, such as "Reduce the rate of child maltreatment in the county." Hence, you will want to continue monitoring the population-level data that you examined in Steps 1 and 2 so you can answer the question "Did we reduce the rate of child maltreatment in the county?" How well the population-level data capture the impact of your home visiting program depends in part on the program's scope. If your program targets only a subset of families in the population, such as teen mothers, the population-level data will mix families served by your program with families that are not, which can make it harder to see the effects of the program.
It is important to ask evaluation questions that the data can accurately answer. For example, if reducing child maltreatment is a goal and the target population is teen mothers, the data would need to show a documented link between teen mothers and child maltreatment in your community. Be careful not to make assumptions that the data do not support or to formulate goals that cannot be measured because of a lack of data.
To home in on the effects of your program, you will also want to collect individual, family-level data from the families you serve. Asking questions at the family level establishes whether participants in the program are actually achieving the outcomes that the program has identified as priorities, such as avoiding Child Protective Services investigations, receiving recommended immunizations on time, quitting smoking, and using car seats.
Start by revisiting the desired outcomes that you defined in Step 2. This will guide what you actually should plan to assess. Take a look at your completed Goals and Desired Outcomes Tool to find the desired outcomes that you identified for your program. You may have identified multiple desired outcomes for each goal. These will all serve as inputs into your outcome evaluation plan.
For example, Townville selected the following goals and desired outcomes in Step 2:
- Townville's program goals are:
- Reduce child abuse and neglect
- Reduce hospitalizations for injury.
- The desired outcomes associated with these goals are:
- Reduced community rates of child maltreatment (a population-level outcome)
- Improved parent understanding of adequate in-home safety procedures (an individual-level outcome).
- For each of their desired outcomes, Townville identified a way to measure that outcome:
- Reduced rates of child maltreatment will be measured using county Child Protective Services (CPS) reports
- Improved parent understanding of adequate in-home safety procedures will be measured using a parent survey.
Townville will thus fill out two tables in the Outcome Evaluation Tool, specifying a question for each desired outcome.
Next, identify how the answer to that question will be measured.
Review the Goals and Desired Outcomes Tool you filled out in Step 2. Use this to fill in the second row of the Outcome Evaluation Planning Tool with how you decided to measure the identified goals and desired outcomes.
You may have identified multiple desired outcomes for each goal, or even multiple measures for each desired outcome. You may have identified that you need to administer a survey or conduct focus groups to assess your desired outcome. Take a look back at Step 1 to identify some commonly available data sources, and also to review Tip 1-3: Collecting your own needs and resources assessment data.
Next, decide whom to assess.
Once you have determined what measure will help you answer your question for each outcome, you will need to decide whom you'll collect data from and how often to collect it. It should be fairly simple to determine whom you will assess:
- If you are conducting intervention activities with 100 families, then you should be able to assess all of the families in the program.
- If you have decided to use a desired outcome that cannot be measured for each participating family (for example, community-wide measures like rates of child maltreatment as in the case of Townville), then you will need to assess the entire community.
- If you are conducting a larger effort, it may not be possible to assess every participant, so you'll need to measure outcomes on a smaller subgroup, called a sample, of the program participants. Keep in mind that the larger and more similar the sample is to the overall group of participants, the more confidence you can have about stating that the results of your assessment apply to the whole population.
- If you plan to evaluate a sample, how you choose it can affect your results. Choosing participants at random is best because it introduces the least bias; a minimal sketch of random sampling follows this list. You may want to consult with an expert on how to choose a sample for your evaluation.
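To make that last point concrete, here is a minimal sketch of drawing a simple random sample in Python; the family IDs and sample size are invented for illustration.

```python
import random

# Hypothetical roster of enrolled family IDs; in practice this would
# come from your program's enrollment records.
enrolled_families = [f"FAM-{n:03d}" for n in range(1, 101)]  # 100 families

# Fixing the seed makes the draw reproducible, so you can document
# exactly how the sample was chosen.
random.seed(2024)

# Draw a simple random sample of 30 families to assess.
sample = random.sample(enrolled_families, k=30)

print(f"Assessing {len(sample)} of {len(enrolled_families)} families")
print(sample[:5])  # first few sampled IDs
```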
Once you've determined whom you plan to assess, enter that information in the third row: Will you be assessing the entire population, the entire group of participants or a sample of participants? If you plan to assess a sample of participants, specify how many you plan to assess and how you plan to choose them.
Next, decide how and when outcomes will be measured.
This will depend on the questions that you have chosen. In most cases, we recommend you do at least a pre- and post-test measurement for at least one cohort (group of participants who joined the program at about the same time). If you have the resources, it is very useful to measure outcomes for the participants again after several months to determine whether the outcomes are sustained.
For example, you can build plans into your evaluation to interview families when they enter the program, and then to interview families again 3, 6, or even 12 months after they have finished the program to see whether your desired outcomes have continued or dropped off over time.
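As a rough illustration of planning that timing, the sketch below computes approximate follow-up dates from a made-up program completion date; months are approximated as 30-day blocks to keep the example dependency-free.

```python
from datetime import date, timedelta

# Hypothetical date when one family finished the program.
completion_date = date(2024, 6, 15)

# Approximate 3-, 6-, and 12-month follow-up dates (a month is
# treated as 30 days for simplicity).
for months in (3, 6, 12):
    follow_up = completion_date + timedelta(days=30 * months)
    print(f"{months}-month follow-up due around {follow_up.isoformat()}")
```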
You may also consider measuring outcomes for another cohort after the program has gone through a couple of program cycles in order to see whether you are still maintaining outcomes or whether outcomes have even improved.
Once you've determined the timing of measurement and how often you plan to measure outcomes, enter that information into the fourth row of the tool. If you plan to measure outcomes for more than one cohort, specify that here.
Next, choose the analysis or comparison you will use.
Now you are ready to choose the analysis or comparison that will answer your evaluation questions. First, consider whether you are examining population-level outcomes or individual-level outcomes. This will influence what sort of comparisons you will use to assess the impact of your program.
In general, you will use either a difference (comparing one average or rate with another) or a percentage change to assess whether there has been a meaningful change in your outcome measure. For example, Townville selected the question "Are child maltreatment rates lower?" and analyzed it by taking the difference between the county-level maltreatment rate before the program started and the rate measured 6 months after the program ended for the first cohort.
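Here is a minimal sketch of both kinds of comparison; the maltreatment rates are invented, and the 0.6-point drop from 12.0 happens to equal a 5 percent reduction.

```python
# Hypothetical county maltreatment rates per 1,000 children.
rate_before = 12.0  # before the program started
rate_after = 11.4   # at the follow-up measurement

# Difference: a positive number means the rate went down.
difference = rate_before - rate_after
print(f"Difference: {difference:.1f} cases per 1,000 children")

# Percentage change relative to the starting rate.
pct_change = difference / rate_before * 100
print(f"Percentage change: {pct_change:.1f}% reduction")
```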
Once you've selected the analysis or comparison you will use, enter the information into the second to last row of the Outcome Evaluation Planning Tool.
Finally, establish benchmarks for what you want to achieve.
Achieving desired outcomes tells you that your program was a success. Setting thresholds in your desired outcomes (e.g., 80 percent of program participants will score 75 percent or higher on a parenting skills post-test) may seem arbitrary; however, thresholds frame your thinking about the evaluation and about what you will consider a "success." There are several methods for setting meaningful benchmarks (a short sketch after this list shows how to check whether a threshold is met):
- First, if you are using an evidence-based program, you can set objectives based on what the program has achieved previously in other communities. Take a look at the outcome information for your selected program to determine whether there are published program evaluations that might have some guidance on what the program has achieved previously.
- Second, you can use your own experience with a target group to set realistic desired outcomes. For example, the outcomes achieved the first time a program is implemented could be viewed as a starting point, and the next time, the outcomes might be even better.
- Third, you can use national or statewide archival data to give you a criterion toward which to aim (e.g., do you want to surpass the national rate in your community?).
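To show how a threshold like the one mentioned above can be checked, here is a minimal sketch using invented post-test scores; the 80 percent and 75 percent figures come from the example at the start of this section.

```python
# Hypothetical post-test scores (percent correct) for ten participants.
post_test_scores = [82, 91, 74, 88, 95, 67, 79, 85, 90, 76]

# Benchmark from the example above: 80 percent of participants
# score 75 percent or higher.
threshold = 75
target_share = 0.80

share_meeting = sum(score >= threshold for score in post_test_scores) / len(post_test_scores)
print(f"{share_meeting:.0%} of participants scored {threshold}% or higher")
print("Benchmark met" if share_meeting >= target_share else "Benchmark not met")
```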
Tool 8-1. Outcome Evaluation
Townville Example 8-1. Townville's Outcome Evaluation Plan
As with the process evaluation plan, Townville's community coalition worked closely with the national office for the XYZ model to establish their outcome evaluation plan. Because Townville had two goals for the program, they filled out two Outcome Evaluation Tools.
Townville used the goals and outcomes that they identified in Step 2 to begin filling out this worksheet. They worked with CPS to determine how frequently they could obtain data from the county CPS reports, and used that information to decide when CPS data will be assessed. They also worked with their home visitors and agency staff to decide when and how they could administer a survey to participating parents on proper in-home safety.
Townville looked at how much the XYZ program was able to reduce child maltreatment in other studies. In general, the program was able to reduce child maltreatment by about 10 percent. However, since this is the first year of implementation, the Townville coalition decided to set a more moderate goal of 5 percent reduction. For the parent knowledge of in-home safety measure, the Townville coalition was more confident. This is material that all home visitors will teach using the XYZ curriculum, so they hoped that parents would average 90 percent correct when answering questions about home safety immediately after the program, and that this would decrease to no less than 85 percent six months after the program ended.
Below is Townville's outcome evaluation planning worksheet for Goal #1.
| Question related to desired outcome | Are rates of child maltreatment lower? |
| --- | --- |
| How will it be measured? | County CPS reports on the number of child maltreatment cases per thousand children |
| Who will be assessed? | Entire population |
| When will it be measured? | Before the first families are served by the program and 6 months after the program ends for the first cohort; reassess rates annually |
| What analysis or comparison answers the question? | Subtract the rate of child maltreatment 6 months after the program ends from the rate of child maltreatment before the program started. If rates improved, this should be a positive number. |
| What is the benchmark you would like to reach for this outcome question? | Rate of child maltreatment decreases by 5 percent |
Below is Townville's outcome evaluation planning worksheet for Goal #2.
| Question related to desired outcome | Do more parents know about proper home safety procedures? |
| --- | --- |
| How will it be measured? | Parent survey |
| Who will be assessed? | Parents participating in the program |
| When will it be measured? | Intake; last home visit; 6 months after the program ends |
| What analysis or comparison answers the question? | Identify the % of home safety questions parents answer correctly at intake. Compare this to the % correct at the last home visit and 6 months after the program ends. If the % of correct answers increases, parents know more about adequate in-home safety. |
| What is the benchmark you would like to reach for this outcome question? | Average of 90% correct after the last home visit; no less than 85% correct 6 months later |
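To make the comparison in the last two rows concrete, here is a minimal sketch with invented survey averages; the 90 percent and 85 percent cutoffs are Townville's benchmarks from the worksheet above.

```python
# Hypothetical average percent-correct scores on the home safety survey.
pct_correct = {"intake": 62.0, "last home visit": 91.5, "6 months later": 86.0}

# Gains relative to intake.
for point in ("last home visit", "6 months later"):
    gain = pct_correct[point] - pct_correct["intake"]
    print(f"{point}: {pct_correct[point]:.1f}% correct ({gain:+.1f} points vs. intake)")

# Townville's benchmarks from the worksheet above.
print("Post benchmark met:", pct_correct["last home visit"] >= 90)
print("Follow-up benchmark met:", pct_correct["6 months later"] >= 85)
```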
Conduct your outcome evaluation
Now it is time to implement your program and conduct the process and outcome evaluations you've planned. Regardless of the methods you've chosen, you'll need to decide who's going to collect the data you need. If your organization doesn't have experience collecting this type of information, you might want to consider hiring someone who does. Having an outside person or organization collect the data would help to reduce the likelihood of biased results.
Protecting participants
Important issues come up about protecting participants in data collection regardless of the method you've chosen. Here are several critical considerations:
- Confidentiality: You must make every effort to ensure that participants' responses are not shared with anyone outside the evaluation team, unless the information reveals someone's imminent intent to harm themselves or others (reporting requirements vary by state). Confidentiality protects the privacy of the participants. Common safeguards include locking the data in a secure place and limiting access to a select group, using code numbers in computer files rather than names, and never connecting data from one person to his or her name in any written report (report only grouped data such as frequencies or averages). Tell participants not only that their answers will be kept confidential but also that the services they receive in the future will not be determined or affected by their answers in any way. (Participating agencies must take this seriously.)
- Anonymity: Whenever possible, data should be collected in such a way that each participant can remain anonymous, meaning that responses to the evaluation are kept separate from identifying information like name and contact information. Again, this protects the privacy of the participants. If you plan to match participants' pre- and post-test measures, you will need a non-identifying way to match surveys, such as assigning each participant a unique identification number or code (a minimal sketch follows this list). Also, if you want to link responses from the outcome evaluation to other data, such as process evaluation data on the number of sessions attended, you will need a plan in place ahead of time. Make sure you tell participants that their data will be kept confidential and anonymous; they will be more likely to give honest answers.
- Institutional review: If you are planning to use these data for internal purposes only, you likely do not need to go through an institutional review board (IRB; a committee formally designated to review research involving people). You would need IRB review only if you were planning to use the data you collect in published research reports.
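One way to set up the matching codes mentioned in the anonymity point is sketched below; the names are invented, and this approach is illustrative rather than the only option.

```python
import secrets

# Hypothetical participant roster, held only by the evaluation team.
participants = ["Alice Rivera", "Bonnie Chen", "Carlos Ortiz"]

# Assign each participant a random, non-identifying code. The key that
# links names to codes is stored securely, separate from survey data.
id_key = {name: f"P-{secrets.token_hex(3)}" for name in participants}

# Pre- and post-surveys are labeled only with the code, never the name,
# so responses can be matched without identifying anyone.
for code in id_key.values():
    print(code)
```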
Analyze the data, interpret the findings, and report your results
Analyze the data
Once you've gathered your data, the next step involves analyzing it. Just as there are quantitative and qualitative data collection methods, there are also quantitative and qualitative data analysis methods. When using quantitative data collection methods, such as surveys, it is common to use quantitative data analysis methods, such as comparing averages and frequencies. It may be worthwhile to consult an expert in data analysis procedures to ensure that you are using appropriate techniques. If you are using evaluation measures from the program developers, then they may have scoring criteria or tell you what values are expected from program participants so that you can assess whether the program is having the intended effect.
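As a simple illustration of comparing averages and frequencies, the sketch below summarizes invented survey results with Python's standard library; a real analysis may call for an analyst's help, as noted above.

```python
from statistics import mean
from collections import Counter

# Hypothetical pre- and post-test scores from a parent survey.
pre_scores = [55, 60, 48, 70, 62]
post_scores = [78, 85, 72, 90, 80]

# Compare average scores before and after the program.
print(f"Average pre: {mean(pre_scores):.1f}, average post: {mean(post_scores):.1f}")

# Frequencies: how many parents gave each response to one survey item.
responses = ["agree", "strongly agree", "agree", "neutral", "strongly agree"]
print(Counter(responses))
```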
Interpret the findings
Drawing conclusions about your ultimate impact requires you to review data on your process measures and desired outcomes to see whether you are actually changing the behaviors you set out to change, and by how much. There's a lot to think about. For example, you may have a well-implemented program but still not achieve the positive outcomes you'd hoped for; digging into the data, you may find that only families served by a seasoned home visitor who had all the requisite training and supervision achieved positive outcomes. This is a good example of how process evaluation data can help you interpret your findings. Perhaps you haven't provided enough dosage, or allowed enough time, for your program to have the desired impact. At the level of community indicators, you may find no changes in neglect but also see from your process evaluation results that only 35 families received the home visiting program in the past year, making it difficult to demonstrate changes at the population level. Interpreting your results in a thoughtful way helps you see what's working and what you need to change.
The conclusions that you come to using the data that you collect will help you develop a plan for continuous quality improvement (CQI), discussed in more detail in Step 9.
Report results
Obviously, the most important reason we evaluate what we're doing is because we want to know whether we're having an impact in the lives of children and families that we're working with. However, sharing your results in simple, meaningful ways can have other useful impacts as well. Keep in mind that different groups of stakeholders may be interested in different types of information. Parents may be less interested in lots of data than funders or local policymakers. In Tip 8-1, we have included some different ways that information might be reported for different audiences.
Tip 8-1. Reporting Evaluation Results for Different Audiences
| Stakeholder | Information of interest | Example of reporting method |
| --- | --- | --- |
| Funder | Whether the program is working | Detailed report with executive summary of findings; grant application (if applicable) |
| Coalition members | Whether the program is working; how the program can be improved | Executive summary of findings and accompanying presentation |
| Agency staff | Whether the program is working; how the program can be improved | Detailed report with executive summary of findings |
| Parents | How the program is impacting children and families in the community | Flyer, web page |
| General public | How the program is impacting children and families in the community | Flyer, web page |
Checklist 8-1. Completion of Step 8
When you finish working on this step, you should have done the following:
- Identified questions you want the evaluation to answer.
- Chosen the measures you want to collect.
- Developed methods to use in the outcome evaluation.
- Developed and finalized a plan to put those methods into place.
- Conducted the outcome evaluation (collected your data).
- Analyzed data, interpreted your findings.
- Reported your results.
Before moving on to Step 9
You should have some idea at this point whether you have actually achieved your desired outcomes. The final two steps in this process will help you reflect on what you've done, fine-tune your work before you conduct your program again, and bring together a set of ideas about how to sustain your work.