Evidence on the Validity, Reliability, and Usability of the Measuring and Improving Student-Centered Learning (MISCL) Toolkit

Key Takeaways and Lessons on Developing Tools for School Improvement

by Julia H. Kaufman, Elizabeth D. Steiner, Jonathan Schweig, Sophie Meyers, Karen Christianson

Download eBook for Free

Full Document: PDF file, 0.6 MB

Technical Appendix: PDF file, 0.9 MB

Research Questions

  1. Drawing on evidence related to content, response processes, internal structure, relationships with external variables, and reliability, to what extent did the MISCL instruments consistently and precisely measure the aspects of student-centered learning (SCL) they were intended to measure?
  2. To what extent was the MISCL Toolkit usable and useful to those who undertook the Toolkit process?

Student-centered learning (SCL) describes various approaches that keep students' goals, interests, and needs central to the teaching and learning process. Despite the recent proliferation of SCL approaches, researchers and practitioners are still learning which SCL strategies are most effective for supporting student achievement and how to measure those strategies.

This report summarizes a study on the validity, reliability, and usability of the Measuring and Improving Student-Centered Learning (MISCL) Toolkit, which was developed to help school systems measure, understand, and reflect on the extent of SCL and equitable distribution of SCL opportunities in high schools.

To assess whether the MISCL Toolkit measured the aspects of SCL and its supports as intended, the research team collected multiple forms of validity and reliability evidence. To assess whether the Toolkit was used as intended and was useful to practitioners, the team collected usability evidence through observations of Toolkit use and interviews with students and staff who used it.

The wide variety of evidence collected for this study suggested that the MISCL Toolkit measured the aspects of SCL it was intended to measure and that the Toolkit may differentiate among levels of SCL across schools. More research is necessary to understand how the aspects of SCL measured through the Toolkit relate to other variables.

Developers of similar toolkits may wish to prioritize design principles that maximize the potential for useful data and minimize burdens and common issues related to collecting and analyzing data in school settings.

Key Findings

The MISCL Toolkit proved to be a reliable and valid measure of SCL, but more research is needed to understand the relationships between SCL and external variables, such as student achievement

  • Toolkit instruments measured the SCL constructs as intended and may differentiate among schools with differing levels of SCL.
  • Survey scales were generally internally consistent and could be used to distinguish among the responses of individual students, instructional staff, school leaders, and district leaders (one common internal-consistency statistic is sketched after this list).
  • Relationships between SCL — as measured through the Toolkit instruments — and external variables were not always consistent with theory.
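
The report summarizes this finding without presenting the underlying statistics here. As a purely illustrative sketch, the Python snippet below computes Cronbach's alpha, one common measure of a survey scale's internal consistency; the function and the responses matrix are hypothetical and do not represent MISCL data or the study's actual analysis.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item across respondents
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of respondents' scale totals
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 5 respondents answering a 4-item Likert scale (not MISCL results)
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # ~0.94; the items move together across respondents
```

High internal consistency of this kind supports treating a set of items as a single scale, but, as the bullet above notes, it does not by itself establish that scale scores relate to external variables as theory predicts.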

Users found the MISCL Toolkit process understandable and useful for measuring SCL in their schools

  • Although users found that the Toolkit process provided useful information, they also found the process burdensome and lengthy. Users' concerns about the burden of Toolkit administration and its potentially evaluative nature led to revisions of some Toolkit content.

Recommendations

  • Developers of data collection tools intended to support school improvement should start with a short and simple set of measures and instruments.
  • Developers of continuous improvement tools should provide a clear description of the capacity required to use those tools.
  • Developers and users of continuous improvement tools should emphasize open communication and a nonevaluative tone to support buy-in and use of those tools.
  • Developers and users of continuous improvement tools should create ample opportunities for participants to explore and compare data.
  • Developers and users of continuous improvement tools should involve students in continuous improvement efforts.

The research described in this report was funded by the Nellie Mae Education Foundation (NMEF) and conducted by RAND Education and Labor.
