
Congress may soon make major changes in federal workforce training programs. However, the proposals being considered are vague about accountability: to whom should programs be accountable, which outcomes should be monitored, and how should performance data be used? Research carried out by the National Center for Research in Vocational Education (NCRVE) has been exploring these questions in the context of vocational education for the last several years. The research suggests that the way these questions are answered will affect the delivery of training in significant ways.

The Changing Policy Context

The 1990 Carl D. Perkins Vocational and Applied Technology Education Act (known as Perkins II) established an accountability system for vocational education based on "outcomes"—factors such as academic skill gains, job placement, and program completion. States were required to establish systems of standards and measures of performance that local programs could use as a basis for program improvement. For the past four years, the states have been developing an information infrastructure to provide the appropriate outcome data. Vocational educators and policymakers have high expectations that vocational training programs will improve as a result.

However, we may never learn if this outcome-based model for program improvement works, because members of Congress are likely to abandon Perkins II before these systems are fully operational. Current congressional proposals would reduce the federal government's role by consolidating job training efforts into a smaller number of programs that have fewer prescriptive guidelines. The focus of recent legislative initiatives is on shifting authority to the states and reducing costs; relatively little attention is given to accountability. The risk is that workforce training nationwide may suffer if neither the federal government nor the states continue to develop accountability mechanisms.

To Whom Should Programs Be Accountable?

Current legislative proposals for workforce training differ in the emphasis they place on individuals, communities, and states in the accountability process. Since these three institutional "actors" can have different goals, their relative roles in accountability can change the way the system behaves. At one extreme, a voucher approach makes individual participants the agents of accountability. If participants are not satisfied with a program or provider, they can "vote with their feet" by taking their training vouchers elsewhere. At the other extreme, many block grant proposals give state agencies the responsibility for ensuring quality and protecting students from unsatisfactory training programs. When state agencies hold the accountability reins, they vote with their dollars by terminating poor-quality programs. An intermediate approach assigns decisionmaking authority to local community councils that, in theory, are more responsive to the needs of the local business community. Under these conditions, accountability reflects a negotiated balance among the needs of employers, providers, and students.

Placing the responsibility for program accountability with individuals, states, or local communities will have different effects on programs because these three groups have different goals. Individual students place greater emphasis on their personal employment goals; states tend to be more respectful of the concerns of schools and districts; and local communities are more responsive to the needs of local employers. When personal goals, institutional goals, and employers' goals conflict, as they often do, the assignment of responsibility for accountability to one group or another will make a difference. For example, students' personal employment goals can be at odds with long-term community or state goals for the workforce. Administrators in one school studied by NCRVE researchers terminated a child care worker training program with high enrollment because the program led women into traditionally low-paying jobs with little chance for advancement. The women enrolled in the program may not have made the same choice.

There are other consequences of assigning accountability to individuals or states. Voucher holders with information about program quality will exercise their accountability functions (by enrolling or withdrawing) relatively quickly and decisively; in contrast, states can take much longer to terminate a marginal program. Schools tend to defend their teachers and programs, and states are reluctant to take decisive actions against schools, especially if they believe that improvements may be possible. On the other hand, if accountability is vested in individual voucher holders, the pressure to eliminate programs may hold sway over the pressure to improve them.

Which Outcomes Should Be Monitored?

The current crop of federal legislative proposals for workforce training gives insufficient attention to which outcomes should be monitored. Regardless of who is responsible for monitoring quality, valid measures of performance must be identified. The decision about which outcomes should be given priority—labor force outcomes, occupational skills, or program completion—affects the way the system operates and can create incentives for behaviors that threaten the quality of the training.

Since the goal of job training programs is employment, the most natural outcomes to monitor are those associated with entry into the labor force. Labor force outcomes include initial employment, wage levels, continuing employment, advancement, and employee and employer satisfaction. However, building accountability around employment can lead to invalid conclusions and incorrect actions. For example, fluctuations in the local labor market pose a difficult measurement problem. Declines in placements might not indicate program failure so much as an economic downturn. Emphasizing placement as a measure of program success may also affect:

  • Who is selected for training—those perceived to be most "employable" or those most in need
  • Which occupations are the focus of training—those that are easiest to "train to" or those demanding the most difficult learning
  • Which skills are emphasized—those that satisfy employers' initial hiring demands or those that bolster employees' long-term career potential

The long-term health of local and state economies might be better served by a system that focuses on skills, rather than labor market success, as the essential outcome. In fact, the recent development of national skill standards establishes a framework that could serve as a basis for such accountability. In the long run, economic productivity might be enhanced more by emphasizing general workforce skills and competencies, such as time management, teamwork, and understanding of technologies and systems, than by emphasizing initial labor market outcomes. Furthermore, defining success in terms of skill improvement rather than skill attainment may create incentives to serve different students. For example, if improvement is rewarded, programs may emphasize service to those with the lowest skills, believing they have the greatest potential to grow. In contrast, if attainment is rewarded, those with the highest skills may appear more attractive because they are more likely to meet completion criteria.

Another set of performance outcomes can be defined in terms of program participation. Useful measures include program and course enrollment, continuation, and completion. Focusing on program participation provides more immediate information about the quality of classroom services and increases the likelihood that program deficiencies can be identified and corrected. It also makes it easier to address concerns about equity of access and services. In the long run, the workforce preparedness system is only as good as its programs, so attention to program quality is important. At the same time, participants' goals do not always match program goals, and measuring outcomes only in terms of program completion can lead to incorrect inferences as well. In particular, some students learn the skills they need to find employment before they complete the coherent set of courses that defines a program. These "non-completers" who leave to take a job may be satisfied with the outcome even though, from the program's point of view, they have not attained the desired result—program completion.

It is not easy to say which outcomes are the right ones to monitor. Perkins II encourages states to include many outcomes rather than focusing narrowly on employment, skills, or program participation alone. Given some flexibility in their choices, states have opted to include more rather than fewer measures. Assessment professionals strongly endorse the use of multiple measures that reflect the objectives of the program.

How Should Outcome Data Be Used?

The authors of Perkins II believed that students would be served best if performance outcomes were used to make programs more responsive to student needs. This emphasis on local responsibility for program improvement was one of the distinguishing characteristics of the legislation. Only if programs were unable to improve themselves did states step in—to offer guidance or, if necessary, to terminate the program. The current legislative proposals for workforce training are vague about the use of outcome data, and some even support a punitive approach. They assume that outcome data are to be used by the state to assure program quality; if the data show deficiencies, the state can revoke funding. In failing to mention program improvement, however, congressional proposals are overlooking one of the most potent uses of performance data. And they ignore the fact that, in practice, programs are rarely terminated. The costs—both political and economic—are simply too great. This dichotomous approach—to fund or not to fund—may be satisfying on a visceral level, but it does not constitute good policy.

NCRVE research has found that local accountability systems can be effective tools for program improvement. In fact, recent research recommends making a broader scope of information available to local decisionmakers so that they can become agents of reform. Outcome measures alone do not provide information about the causes of problems—only about their effects. Information about why programs are failing to meet their goals is far more useful for purposes of reform than counts of completers and placements.

The research also calls for increasing the expertise of people who are selecting, collecting, and analyzing data and then using the information for program decisions. Studies of school reform have shown that data collection is seldom a catalyst for change, particularly mandated data collection. Schools tend to use data only to signal their compliance with regulations, not as the basis for informed program improvement. And vocational educators, like their counterparts in general education, have limited experience with the use of data to manage or improve programs.

Making Accountability Work

Research conducted by NCRVE has been examining the broad issue of accountability, and specifically the effects of the provisions of Perkins II on workforce training, for the last four years. As suggested here, three conclusions are germane to the congressional debate. First, workforce training programs should be accountable to multiple constituents—students, the local business community, and the state. Shifting the emphasis to a single constituency creates unbalanced incentives that could undermine the quality of training. Second, it matters a great deal which program outcomes are monitored—labor force outcomes, occupational skills, or program completion. The choice of outcomes affects the way the system operates and can distort system performance. Multiple outcome measures, as well as data about instructional processes, are preferred. Third, producing performance data does not guarantee that such data will be used effectively. Provisions must be made to help state and local authorities and program administrators use information in the most effective way: that is, to make beneficial choices, to strengthen successful training programs, and to eliminate unsuccessful ones.


Hill, P. T., and J. Bonan (1991). Decentralization and Accountability in Public Education. R-4066-MCF/IET. Santa Monica: RAND.

Stecher, B. M., and L. M. Hanser (1993). Beyond Vocational Education Standards and Measures: Strengthening Local Accountability Systems for Program Improvement. R-4282-NCRVE/UCB. Santa Monica: RAND.

Stecher, B. M., M. L. Rahn, L. M. Hanser, K. Levesque, B. Hallmark, E. G. Hoachlander, D. Emanuel, and S. G. Klein (1994). Improving Perkins II Performance Measures and Standards: Lessons Learned from Early Implementers in Four States. MR-526-NCRVE/UCB. Santa Monica: RAND. (Also published as NCRVE MDS-732.)

Stecher, B. M., L. M. Hanser, M. L. Rahn, K. Levesque, S. G. Klein, and D. Emanuel (1995). Improving Performance Measures and Standards for Workforce Education. MDS-821. Berkeley: National Center for Research in Vocational Education.

This report is part of the RAND issue paper series. The issue paper was a product of RAND from 1993 to 2003 that contained early data analysis, an informed perspective on a topic, or a discussion of research directions, not necessarily based on published research. The issue paper was meant to be a vehicle for quick dissemination intended to stimulate discussion in a policy community.

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.