Making Sense of Data-Driven Decision Making in Education

Evidence from Recent RAND Research

Published Nov 7, 2006

by Julie A. Marsh, John F. Pane, Laura S. Hamilton


Data-driven decision making (DDDM), applied to student achievement testing data, is a central focus of many school and district reform efforts, in part because of federal and state test-based accountability policies. This paper draws on RAND research to show how schools and districts are analyzing achievement test results and other types of data to make decisions aimed at improving student success. It examines DDDM policies and suggests directions for future research in the field. A conceptual framework, adapted from the literature and used to organize the discussion, recognizes that multiple data types (input, outcome, process, and satisfaction data) can inform decisions, and that the mere availability of raw data does not ensure its effective use.

The paper addresses three research questions: What types of data are administrators and teachers using, and how are they using them? What support is available to help them use these data? What factors influence the use of data for decision making?

RAND research suggests that most educators find data useful for informing aspects of their work and that they use data to improve teaching and learning. The first implication of this work is that DDDM does not guarantee effective decision making: having data does not mean that they will be used appropriately or lead to improvements. Second, practitioners and policymakers should promote the use of various data types collected at multiple points in time. Third, equal attention needs to be paid to analyzing data and to taking action based on them; capacity-building efforts may be needed to achieve this goal. Fourth, RAND research raises concerns about the consequences of high-stakes testing and excessive reliance on test data. Fifth, attaching stakes to data such as local progress tests can lead to the same negative practices that appear in high-stakes testing systems. Finally, policymakers seeking to promote educators' data use should consider giving teachers flexibility to alter instruction based on data analyses.

More research is needed on the effects of DDDM on instruction, student achievement, and other outcomes; on how the focus on state test results affects the validity of those tests; and on the quality of the data being examined, the analyses educators are undertaking, and the decisions they are making.

The research described in this report was conducted within RAND Education.

This report is part of the RAND occasional paper series. RAND occasional papers may include an informed perspective on a timely policy issue, a discussion of new research methodologies, essays, a paper presented at a conference, or a summary of work in progress. All RAND occasional papers undergo rigorous peer review to help ensure that they meet high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.