Establishing a Research and Evaluation Capability for the Joint Medical Education and Training Campus

by Sheila Nataraj Kirby, Julie A. Marsh, Harry J. Thie

RAND Health Quarterly, 2011; 1(2):8

Abstract

In calling for the transformation of military medical education and training, the 2005 Base Realignment and Closure Commission recommended relocating basic and specialty enlisted medical training to a single site to take advantage of economies of scale and the opportunity for joint training. As a result, a joint medical education and training campus (METC) has been established at Fort Sam Houston, Texas. Two of METC’s primary long-term goals are to become a high-performing learning organization and to seek accreditation as a community college. Such goals require a clear model of organizational improvement, well-defined metrics for measuring METC’s performance, and the use of research and evaluation to assess and improve that performance. Lessons learned from a review of practices at institutions with similar missions—such as community colleges, corporate universities, the UK’s Defence Medical Education and Training Agency, and other federal agencies, such as the Veterans Health Administration—establish a clear need for an office of institutional research to help METC attain its organizational goals. They also provide useful recommendations regarding the METC office’s structure, scope, and governance.

For more information, see RAND MG-981-OSD at https://www.rand.org/pubs/monographs/MG981.html

Full Text

The 2005 Base Realignment and Closure (BRAC) Commission report recommended relocating basic and specialty enlisted medical training to Fort Sam Houston, Texas, to take advantage of economies of scale and to facilitate joint training. To fulfill the BRAC recommendation, a joint medical education and training campus (METC) was established at Fort Sam Houston and became officially operational on June 30, 2010, although its initial training course, Radiography Specialist, began in April. Other courses will be phased in over several months. METC will consolidate at Fort Sam Houston most of the medical enlisted training currently conducted at several military installations. When it is fully established, it will be responsible for training more than 100 enlisted medical specialties and will be one of the world’s largest medical education and training institutions, with an annual throughput of more than 24,500 students, an average daily student load of more than 8,000, and a total of 1,400 faculty and staff members. The vision is for METC to become the nation’s leading military medical education and training institution, and its stated goals are to capture best practices and achieve efficiencies in training and to transform itself into a high-performing, “learning” organization.

RAND was asked to provide technical and research assistance in several areas related to METC’s implementation, including the need for and feasibility of establishing a research and evaluation capability within METC—the focus of this article.

The study aimed to address two major research questions:

  1. Does METC need a research and evaluation capability?
  2. What lessons can be learned from institutions with missions similar to that of METC in terms of research and evaluation activities and the structure and scope of an office of institutional research (OIR)?

We discuss each of these issues in turn, providing a description of our data, approach, and findings.

Does METC Need a Research and Evaluation Capability?

To answer this question, we took as a starting point METC’s avowed long-term goals of becoming a high-performing organization and seeking accreditation. To understand the role that research and evaluation play in high-performing organizations, we examined the literature on high-performing organizations and models of organizational improvement that such organizations typically implement. We focused on the framework established by the Malcolm Baldrige National Quality Award (MBNQA) program, both because it is arguably the nation’s most prestigious quality award and because it has been adapted for the education sector. We reviewed the process and requirements for accreditation at three institutional accrediting bodies—the Southern Association of Colleges and Schools, the Council on Occupational Education, and the Accrediting Bureau of Health Education Schools—whose missions seem best aligned with METC’s proposed purpose and structure.

Characteristics of a High-Performing Organization

Our review showed that high-performing organizations are focused on achieving results and outcomes and that, “to sustain a focus on results, high-performing organizations continuously assess and benchmark performance and efforts to improve performance” (GAO, 2004, p. 7). At the heart of most high-performing organizations is an organizational improvement model or methodology, such as Total Quality Management, Lean Production, Six Sigma, and the MBNQA framework. All these models emphasize measurement and analysis of organizational performance and the use of these results for organizational improvement. Thus, to support its goal of becoming a high-performing organization, METC will need to develop and sustain the capability to collect, organize, analyze, and use data on a variety of processes and outcomes to support innovation and performance excellence. In addition, it will need to review these indicators and its data and analysis systems on a periodic basis to adapt to new or changing environments and stakeholder needs.

Requirements for Accreditation

In the United States, accreditation is the primary stamp of approval indicating that an institution provides a legitimate education that meets standards of quality. Individual programs in an institution may also be accredited, which serves as an indicator that an educational program has met specific quality standards. Many of METC’s programs require program accreditation. Early planning discussions also indicated that METC may one day seek formal accreditation as a degree-granting institution of higher education, accessible to members of all services.

Accreditation bodies are increasingly requiring programs and institutions to develop and implement quality-improvement plans and learning objectives and to provide credible evidence of the value added to student learning and subsequent workforce outcomes. The standards of the three accrediting organizations that we examined in detail also specify a variety of quality indicators that may be used for assessment and evaluation of occupational education programs, including (among others) graduation or completion rate, employment or placement rate, pass rate on professional licensure exams, employer satisfaction, participant satisfaction, and assessment of occupational skills and knowledge. Notably, several of these indicators (e.g., licensure exam pass rate, employer satisfaction, placement rate) require follow-up with program graduates and supervisors. In addition, several organizations provide benchmarks for certain indicators (such as graduation rates or licensure exam pass rates), typically determined by evaluating data from current organizational members or peer institutions.
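
To make these indicators concrete, the fragment below is a purely illustrative sketch (in Python) of how an institutional research office might compute a few of them, such as graduation rate, licensure exam pass rate, and placement rate, from a hypothetical cohort of student records; the record fields are assumptions made for illustration, not METC or accreditor data definitions.

    # Illustrative sketch: computing selected quality indicators from a
    # hypothetical cohort of student records. Field names are assumptions,
    # not METC or accreditor data definitions.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class StudentRecord:
        graduated: bool                    # completed the program
        passed_licensure: Optional[bool]   # None if the graduate has not yet tested
        placed_in_field: Optional[bool]    # None if no follow-up data are available

    def rate(numerator: int, denominator: int) -> Optional[float]:
        # Return a percentage, or None when the denominator is zero.
        return round(100.0 * numerator / denominator, 1) if denominator else None

    def quality_indicators(cohort: list[StudentRecord]) -> dict[str, Optional[float]]:
        grads = [s for s in cohort if s.graduated]
        tested = [s for s in grads if s.passed_licensure is not None]
        followed_up = [s for s in grads if s.placed_in_field is not None]
        return {
            "graduation_rate": rate(len(grads), len(cohort)),
            "licensure_pass_rate": rate(sum(s.passed_licensure for s in tested), len(tested)),
            "placement_rate": rate(sum(s.placed_in_field for s in followed_up), len(followed_up)),
        }

Note that the licensure and placement indicators are computed only over graduates with follow-up data, which underscores why accreditors’ emphasis on following up with graduates and supervisors matters for data collection.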

While standards for all three organizations are relatively nonprescriptive, a handful of standards may present larger substantive issues for METC, in particular those related to governance structures, program length, and faculty credentials. Regardless, should METC seek accreditation in the future, it will need a research and evaluation capability to meet the accreditation requirement for institutional improvement plans, embedded assessment, and tracking of a variety of indicators.

What Lessons Can Be Learned from Institutions with Similar Missions?

To gather lessons learned from institutions with similar missions, we undertook three research tasks.

First, because METC is akin to a technical community college, we conducted a series of interviews with the heads of OIRs at nine selected colleges (chosen because they were large and more likely to adopt best practices or were widely regarded as exemplars) and representatives from four professional associations and networks with which these leaders were affiliated.

Second, because METC’s mission is more limited than that of a traditional college (in the sense that it will offer primarily technical training, rather than general education), we examined research and evaluation activities in organizations with missions similar to METC’s, including

  • corporate universities
  • METC’s counterpart in the United Kingdom, the Defence Medical Education and Training Agency (DMETA), which was established by the UK Ministry of Defence in 2003 to provide joint education and training for military medical personnel in the three services (British Army, Royal Navy, and Royal Air Force)
  • other federal agencies, such as the Veterans Health Administration (VHA), that invest considerable resources in training and development.

We reviewed extant literature on corporate universities. Because DMETA is closely allied with METC in terms of mission and focus, we conducted interviews with senior leaders there and reviewed materials that they provided. To understand what federal agencies were doing, we reviewed extensive work done by the U.S. Government Accountability Office (GAO) in this area. As part of the larger RAND project, we had conducted interviews with several senior leaders at the VHA. In those interviews, we also gathered information on that organization’s approach to research and evaluation.

Third, because all these organizations emphasize evaluation of training programs, we reviewed the strategic framework outlined by the GAO for designing and implementing training evaluations. This process incorporated best practices and offered several useful guidelines for METC in terms of undertaking evaluations of training programs.

Insights from Community Colleges and Four-Year Institutions

OIRs in higher-education institutions appear to have a range of functions: data management, internal reporting, external reporting, accreditation, and strategic planning, to name a few. In particular, respondents stressed the importance of organizing data collection and management, delineating a common terminology and data definitions, and establishing a centralized data warehouse. The majority of institutions reported conducting periodic surveys of students, including entry and exit surveys, student satisfaction surveys, and course evaluations. A few institutions (generally the four-year colleges and larger community colleges) reported participating in research and evaluations of programs or initiatives. These evaluations were often internal and were intended to inform policy review and improvements. Respondents also offered various lessons learned regarding the structure and governance of an OIR. (See the section “Recommendations for METC” for additional details.)

Insights from Corporate Universities

Corporate universities are separate entities that are primarily responsible for the development and implementation of training programs for members of the parent organization (Meister, 1994). METC closely resembles a corporate university in that it was set up to provide consolidated medical education and training to Army, Navy, and Air Force medical enlisted personnel to support the mission of the Military Health System (MHS). Our review of the literature revealed some common themes. First, although corporate universities differ in scope and function, measurement and evaluation of program effectiveness is always a key component, and corporate training leaders devote significant resources and attention to evaluation. Second, numerous authors noted that best-practice organizations build evaluation into training programs early by devoting considerable attention to evaluation issues in the program development and planning phase. Third, best-practice organizations focus on the customer in their evaluation efforts. Evaluators in these corporations consult with customers—broadly construed—to determine their requirements, which standards to set, and which outcomes to measure (Dixon, 1996). Fourth, evaluation in best-practice organizations is focused not simply on program improvement but on broader organizational improvement as well. Thus, evaluations are designed and implemented with strategic organizational goals in mind.

Insights from the Defence Medical Education and Training Agency

While DMETA provides training to a broad array of military medical personnel, both officers and enlisted, it is similar to METC in that it provides training to allied health professionals (e.g., radiographers, operating department technicians) and combat medical technicians or medical assistants. DMETA’s approach to training is guided by the Defence Systems Approach to Training model, which espouses a cyclical but iterative approach to training and emphasizes continuous evaluation throughout the process to allow adjustments to be made as and when needed.

All training at DMETA is evaluated. Internal evaluation is expected to measure, through an after-action review, an individual’s immediate outcomes and the learning transfer achieved by the training activity. In addition, the services are expected to validate changes in an individual’s behavior as a result of the training activity, how well the enhancement of knowledge, skills, or attitudes has prepared the individual for his or her role, and the contribution of training to the achievement of business or operational goals. However, DMETA has taken a more proactive stance in coordinating external validation across the UK’s three military services, implementing an “early-warning feedback” form of external validation to identify and inform the requirement for a more rigorous, full evaluation of the training. The results of this full evaluation are fed back into the system and used to check the accuracy of job performance requirements and to confirm that the training being delivered still meets the operational requirements of the services.

In addition, DMETA must collect data and report annually on several performance indicators that are part of the Defence Balanced Scorecard. These indicators include, among others, the percentage of DMETA personnel who achieved the mandatory individual military training; the extent to which provision of initial and career, professional, and continuation training meet the requirements, standards, and timescales of the services; and whether the customer confidence index score is within the set target.
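
As a hypothetical illustration only, the fragment below shows one simple way such scorecard indicators might be recorded and checked against their set targets; the indicator names, values, and targets are invented for the example and are not the actual Defence Balanced Scorecard definitions.

    # Hypothetical sketch: checking scorecard-style indicators against set targets.
    # Indicator names, values, and targets are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Indicator:
        name: str
        value: float
        target: float
        higher_is_better: bool = True

        def meets_target(self) -> bool:
            if self.higher_is_better:
                return self.value >= self.target
            return self.value <= self.target

    scorecard = [
        Indicator("pct_personnel_completing_mandatory_individual_military_training", 96.0, 95.0),
        Indicator("customer_confidence_index", 4.1, 4.0),
    ]

    for indicator in scorecard:
        status = "on target" if indicator.meets_target() else "below target"
        print(f"{indicator.name}: {indicator.value} (target {indicator.target}) -> {status}")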

Insights from Federal Agencies

In 2004, the GAO reported on several federal agencies’ experiences and lessons learned regarding designing effective training and development programs. It noted that (1) evaluation of training was a key component of the training process at the organizations studied, and (2) the agencies had begun to use more comprehensive and sophisticated techniques for assessing the extent to which training and development programs increased employees’ knowledge and skills and enhanced individual and organizational performance. These techniques included pre- and post-testing, tracking changes in individual and program performance, and some limited use of return-on-investment analyses. Our case study of the VHA (see Kirby et al., 2010) showed that over the last several years, the VHA has spent considerable time and effort transforming itself into a high-performing learning organization, leveraging its National Center for Organizational Development, a central office that measures and monitors the organizational health of the VHA. In addition, it has strongly embraced continuous assessment, feedback, and redesign for the entire organization’s training and development programs and invested considerable resources in evaluation, performance measurement, and metrics for organizational improvement.

GAO’s strategic framework for designing and implementing effective training and development programs highlights the importance of integrating evaluation into each step of the training and development process because agencies need to be able to demonstrate how these efforts help develop employees and improve the agencies’ performance. We also reviewed various types and levels of evaluation.

Recommendations for METC

There is a clear need for a research and evaluation capability within METC that can further its current goal of becoming a high-performing organization and its future goal of being accredited. Such a capability can also help address the federal government’s increasing need to measure performance and cost-effectiveness and to provide evidence of the value added by training. At community colleges, such a research and evaluation capability is typically housed in an OIR; establishing one at METC requires defining the structure and scope of such an office. Our interviews and literature reviews point to some useful recommendations in this regard.

Structure, Governance, and Staffing

In terms of structure, governance, and staffing, METC would benefit from the following guidance in establishing its OIR:

  • Position the METC OIR so that it reports to senior leadership and its director is part of the senior management team. This arrangement would help ensure that the office is taken seriously and that the director has credibility and the authority to access the needed data.
  • Ensure that the office is adequately staffed and that the staff have a mix of skills, including technical skills (e.g., statistics, information technology, programming), as well as broader enterprise knowledge and communication and interpersonal skills—particularly the ability to convey the meaning of the data collected on the training and development activities. Staffing in OIRs in the larger community colleges and four-year institutions tended to range from four to 14 full-time staff members; size is obviously a function of the scope of the office.
  • Collaborate with other METC departments, participate in institutional committees, and extend opportunities for all concerned stakeholders to provide input into the continuous improvement process and gain buy-in.
  • Encourage OIR staff to participate in professional associations and networks to learn about best practices and to foster personal and professional growth. In addition, ensure that the OIR director develops collaborative relationships with community colleges, corporate universities, other federal agencies (such as the VHA), and DMETA to learn about best practices in research and evaluation activities.

OIR Scope

In terms of scope, the following recommendations were relevant to METC’s mission:

Examine METC’s vision and goals and map them against the types of data needed to measure progress. Then, examine the institutional structure within METC to delineate the roles and responsibilities of the various offices to avoid both duplication of effort and the overlooking of essential functions.

Consider the following, among other functions, when defining the scope of the OIR:

  • Build a centralized data warehouse to track students, indicators of student learning, and student progress (a minimal schema sketch follows this list).
  • Work with the leadership team to collect and report data for METC’s balanced scorecard and help translate the results so that they can be used for organizational improvement.
  • Collect, analyze, and report basic data on the institution that might be needed for external reporting.
  • Design and evaluate training programs:
    Work with other academic offices responsible for the design and implementation of training to incorporate evaluation from the office’s inception.
    Examine the full gamut of training programs, and determine the types of evaluations that might be appropriate for each. Generally accepted models of evaluation have several stages involving increasingly complex and expensive measures. The office could help determine which programs would warrant the higher, more complex levels of evaluation that require following up with supervisors and others in the field to determine the impact on performance.
    Communicate and disseminate results in ways that allow them to be used to improve training.
  • Work with program accreditation committees to understand the types of data and reporting required, and ensure that they are feasible.

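As a minimal sketch of what the core of such a centralized data warehouse might look like, the following Python/SQLite fragment defines student, enrollment, and outcome tables and computes graduation rate by program; the table and column names are assumptions made for illustration, not METC’s actual data elements or systems.

    # Minimal sketch: core tables for a centralized student-tracking warehouse.
    # Table and column names are illustrative assumptions, not METC data elements.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE student (
        student_id     TEXT PRIMARY KEY,
        service_branch TEXT NOT NULL,   -- e.g., Army, Navy, Air Force
        entry_date     TEXT NOT NULL    -- ISO 8601 date
    );
    CREATE TABLE enrollment (
        enrollment_id  INTEGER PRIMARY KEY,
        student_id     TEXT NOT NULL REFERENCES student(student_id),
        program_code   TEXT NOT NULL,   -- e.g., a radiography specialist course
        start_date     TEXT NOT NULL,
        completion     TEXT CHECK (completion IN ('graduated', 'withdrew', 'in_progress'))
    );
    CREATE TABLE outcome (
        enrollment_id     INTEGER PRIMARY KEY REFERENCES enrollment(enrollment_id),
        licensure_pass    INTEGER,      -- 1/0; NULL if the graduate has not yet tested
        placement         INTEGER,      -- 1/0; NULL if no follow-up data are available
        supervisor_rating INTEGER       -- e.g., 1-5 scale from field follow-up
    );
    """)

    # Example indicator query: graduation rate by program.
    rows = conn.execute("""
        SELECT program_code,
               ROUND(100.0 * SUM(completion = 'graduated') / COUNT(*), 1) AS grad_rate_pct
        FROM enrollment
        GROUP BY program_code
    """).fetchall()

A shared data dictionary defining each element, as respondents recommended, would let every METC office report against the same definitions.
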
The roles and responsibilities of the OIR are likely to change as it matures, but it is important to lay the groundwork now and to ensure that these functions are housed somewhere within METC, either in the OIR or in other offices. Perhaps the most immediate and important of these functions is to be proactive in designing a centralized warehouse for data with carefully defined, consistent data elements and data sources, clearly identifying the rationale and responsibility for data collection. The database should be designed to be flexible and adaptable so that it can easily respond to changing and additional demands as METC becomes more established and as the scope of the OIR expands. Recognizing the centrality of research and evaluation activities by establishing an OIR under the direction of an experienced institutional researcher is an important first step to becoming a high-performing, results-driven organization.

References

Dixon, Nancy M., “New Routes to Evaluation,” Training and Development, May 1996, pp. 82–85.

GAO—see U.S. General Accounting Office.

Kirby, Sheila Nataraj, Julie A. Marsh, Jennifer Sloan McCombs, Harry J. Thie, Nailing Xia, and Jerry M. Sollinger, Developing Military Health Care Leaders: Insights from the Military, Civilian, and Government Sectors, Santa Monica, Calif.: RAND Corporation, MG-967-OSD, 2010. As of October 2010:
http://www.rand.org/pubs/monographs/MG967.html

Meister, Jeanne C., Corporate Quality Universities: Lessons in Building a World-Class Work Force, Alexandria, Va.: American Society for Training and Development, 1994.

U.S. General Accounting Office, Comptroller General’s Forum: High-Performing Organizations, Washington, D.C., GAO-04-343SP, February 2004.

RAND Health Quarterly is produced by the RAND Corporation. ISSN 2162-8254.