
Research Brief

Key Points

  • Distributed learning (DL) is the key to the Army's training strategy, but there are no systematic program-level assessments of DL effectiveness.
  • RAND Arroyo Center developed an approach to evaluating DL courseware that reveals strengths and areas for improvement in technical features and instructional design.
  • This method is cost-effective and should be part of a comprehensive evaluation program supporting continuous improvement in Army DL.

Since 1998, the Army's Training and Doctrine Command (TRADOC) has been engaged in establishing and fielding The Army Distributed Learning Program (TADLP) to enhance and extend traditional methods of learning. The Army intends to achieve a number of important goals through distributed learning (DL), including increased access to standardized training, improved unit operational readiness, and reduced costs. The Army envisages a greatly increased role for DL over time, and the development of interactive multimedia instruction (IMI) courseware is an important element of the training strategy.

Development and evaluation of Army DL is decentralized in individual proponent schools and centers, and there have been limited efforts to assess the effectiveness of DL training at the program level. TRADOC asked Arroyo to assess how efficiently and effectively TADLP has accomplished its objectives overall. For one component of this evaluation, the research team developed and tested a method of evaluating the instructional design and technical features of asynchronous IMI courses. Using standards from the training and development community, the team developed criteria to evaluate IMI courseware. The researchers then applied the criteria to a sample of 79 lessons from 10 high-priority courses in order to assess the feasibility of this approach for evaluating courseware in a highly resource-constrained environment, illustrate the kinds of information produced by such an evaluation, and demonstrate how that information can be used to identify areas for improvement in courseware and to monitor quality at the program level.

Some Features of IMI Courseware Need Improvement

Production-Quality Criteria for Courseware

Criterion                                        Proportion Rated Positive
Legibility of text and graphics                  0.80
Audiovisuals
  Narration easy to understand                   1.00
  Minimal irrelevant content                     0.85
  Use of animation/video to demonstrate process  0.75
  Techniques to maintain learner interest        0.50
  Few sensory conflicts                          0.40

Note: Teal = 85–100% rated positive; gold = 70–84%; red = below 70%.
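
To make the color coding concrete, the short sketch below (an illustrative example, not code from the study) restates the table's ratings and applies the banding rule from the note:

```python
# Illustrative sketch only: maps each criterion's proportion-positive rating
# to the color bands defined in the table note. The thresholds come from that
# note; the ratings dict restates the table rows.

def band(proportion_positive: float) -> str:
    """Assign the table note's color band to a proportion rated positive."""
    if proportion_positive >= 0.85:
        return "teal"  # 85-100% rated positive: a strength
    if proportion_positive >= 0.70:
        return "gold"  # 70-84% rated positive
    return "red"       # below 70% rated positive: needs improvement

ratings = {
    "Legibility of text and graphics": 0.80,
    "Narration easy to understand": 1.00,
    "Minimal irrelevant content": 0.85,
    "Use of animation/video to demonstrate process": 0.75,
    "Techniques to maintain learner interest": 0.50,
    "Few sensory conflicts": 0.40,
}

for criterion, rating in ratings.items():
    print(f"{criterion}: {rating:.0%} positive -> {band(rating)}")
```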

Analysis revealed that technical characteristics were the strongest features of the courseware. All courses were easy to navigate, and cues to the learner's position in the course were readily accessible. The key areas for improvement in technical features are (1) ensuring that students can launch the courseware without professional assistance and (2) linking course content with supplementary instructional resources. Providing direct access to reference materials such as glossaries and field manuals could give students powerful tools for rapidly deepening their knowledge in specific task areas.

Production quality was generally strong (see the table). Narration was easy to understand, courses had minimal irrelevant content, and graphics and text were legible. Improvement is needed, however, in eliminating sensory conflicts, such as simultaneous presentation of text and spoken narration, and in making greater use of multimedia.

Ratings of pedagogical characteristics revealed a number of strengths, including clear lesson objectives, appropriate sequencing of lessons, clear and comprehensive instruction of concepts, and opportunities for learners to correct their strategies in checks on learning. However, pedagogy was the area most in need of improvement. A pervasive problem was a lack of context or examples from job or mission environments. Courses also need to improve instruction of procedures by providing clearer demonstrations, offering higher-fidelity opportunities for practice, and including explanations of why procedures work the way they do.

Best Practices for DL Training

The results suggest that IMI is best suited for training concepts and processes, but can be used to train procedures in some situations:

  • When procedures can be practiced realistically within the IMI itself (such as completing forms) or with the addition of simple job aids.
  • When learning is not subject to rapid decay or is easily refreshed.
  • When IMI supplements resident training.
  • When training is supported by a high level of instructor-student interaction.

The Army also can improve the quality of instruction and increase user engagement by designing IMI with higher levels of interactivity between the student and the courseware. For example, IMI that requires students to move objects on the screen can be used to train procedures such as using a compass. For more complex tasks, such as how to enter and clear a building, videogame-like simulations could be used in which learners must make decisions about appropriate methods of entry in a dynamic environment.

The Method Can Contribute to Program-Level Assessments of Training Effectiveness

The approach employed by the Arroyo research team provides a systematic method of evaluation using a comprehensive set of criteria based on standards proposed by training experts. It yields quantifiable data, enabling integration of results across courses, schools, and other units, and it requires relatively modest resources. By applying the method to a larger and more diverse set of courses on an ongoing basis, the Army could gain valuable information about courseware quality, identify needs for improvement, and monitor the effects of changes to training policy, development processes, or doctrine.
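
Because each criterion yields a quantifiable proportion-positive score, results can be rolled up from lessons to courses, schools, or the program as a whole. The following is a minimal sketch of such an aggregation, assuming a simple record layout that the brief does not specify:

```python
# Minimal aggregation sketch. The record layout is a hypothetical assumption
# (the brief does not specify one); the rollup is simple counting.
from collections import defaultdict

# Each record: (course, lesson, criterion, rated_positive)
lesson_ratings = [
    ("Course A", "Lesson 1", "Few sensory conflicts", False),
    ("Course A", "Lesson 2", "Few sensory conflicts", True),
    ("Course B", "Lesson 1", "Few sensory conflicts", True),
]

# (course, criterion) -> [positive count, total count]
tallies = defaultdict(lambda: [0, 0])
for course, _lesson, criterion, positive in lesson_ratings:
    tallies[(course, criterion)][0] += int(positive)
    tallies[(course, criterion)][1] += 1

for (course, criterion), (pos, total) in sorted(tallies.items()):
    print(f"{course} | {criterion}: {pos / total:.0%} positive ({total} lessons)")
```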

In addition to evaluating courseware, a comprehensive evaluation of training quality requires several other types of measures and methods, including (1) measures of outcomes (student reactions, learning, job performance, and organizational outcomes); (2) test evaluation to assess the quality of course tests; and (3) administrative data, such as completion rates, cost data, and cycle time of courseware production, which can point to potential negative or positive aspects of course quality. Taken together, these measures would provide a basis for achieving continuous improvement in the development and use of IMI and help the Army reach its strategic goals for DL.

Research conducted by RAND Arroyo Center

This report is part of the RAND Corporation research brief series. RAND research briefs present policy-oriented summaries of individual published, peer-reviewed documents or of a body of published work.

The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.