The Feasibility of Developing a Repository of Assessments of Hard-to-Measure Competencies

Published Nov 3, 2015

by Kun Yuan, Brian M. Stecher, Laura S. Hamilton

Download eBook for Free

Format: PDF file
File size: 2 MB

Use Adobe Acrobat Reader version 10 or higher for the best experience.

Research Questions

  1. How feasible is it to build a repository of assessments of hard-to-measure competencies?
  2. What are the potential challenges in building such a repository?
  3. What design features will make the repository most useful to potential users?
  4. What does experience from doing this feasibility study suggest for building such a repository in a future project?

The William and Flora Hewlett Foundation engaged RAND to conduct research related to the conceptualization and measurement of skills for deeper learning (e.g., critical thinking). This report explores the feasibility of and challenges associated with building a repository of assessments of hard-to-measure competencies, such as those associated with deeper learning.

This feasibility study focused on two aspects of building and maintaining a repository of measures. First, the authors examined the procedures needed to collect, review, document, and catalog assessments for a computerized database. As part of this effort, the authors built a small database of assessments of hard-to-measure competencies, focusing on measures applicable to K–12 students in typical school settings in the United States. Second, the authors examined web-based archives to identify the best functional modules and user-interface features to incorporate into a repository of measures. The study emphasized the collection of assessment information and materials more than the design of a website. Overall, this feasibility study was exploratory in nature. The results are encouraging, but the authors identified some challenges to be overcome in assembling the information needed for a repository.

Key Findings

It Is Possible to Find Assessments of Many Hard-to-Measure Competencies, but There Will Be Challenges in Gathering Relevant Information on Quality and Use

  • A large number of relevant assessments could be identified and cataloged, but some information about the assessments was hard to gather.
  • Collecting assessments and preparing them for the repository follows a logical sequence: clearly identifying the constructs of interest, developing search criteria and conducting searches, and then collecting and organizing the information about each identified measure.

To Be Effective, an Online Repository Should Include Five Key Functions: Search, Display, Evaluate, Use, and Connect

  • Interviews with potential repository users confirmed the potential value of such a repository and highlighted the importance of these five functions for making it usable.
  • A repository can serve as an important first step in helping users find measures and evaluate their suitability for decisionmaking.
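As a rough illustration, the five functions could map onto a minimal repository interface. This is a hypothetical sketch under stated assumptions, not the report's design; all class, field, and method names are illustrative:

```python
from dataclasses import dataclass

# Hypothetical sketch of the five repository functions (search, display,
# evaluate, use, connect). All names and fields are illustrative.

@dataclass
class Measure:
    name: str
    constructs: list    # competencies the measure targets
    grade_range: str    # e.g., "K-12"
    evidence: str       # brief summary of quality evidence
    related: list       # names of related measures

class Repository:
    def __init__(self):
        self.measures = []

    # Search: locate measures by construct of interest.
    def search(self, construct):
        return [m for m in self.measures if construct in m.constructs]

    # Display: show cataloged information about a measure.
    def display(self, measure):
        return f"{measure.name} ({measure.grade_range}): {', '.join(measure.constructs)}"

    # Evaluate: surface evidence users need to judge suitability.
    def evaluate(self, measure):
        return measure.evidence

    # Use: point users to the materials needed to administer the measure.
    def use(self, measure):
        return f"See administration materials for {measure.name}"

    # Connect: link a measure to related measures in the repository.
    def connect(self, measure):
        return [m for m in self.measures if m.name in measure.related]
```

A real repository would implement these functions as a searchable website rather than an in-memory class, but the sketch shows how the five functions operate on the same cataloged records.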

A Number of Actions Will Need to Occur to Develop an Actual Repository

  • This study might be considered a pilot test for building a repository.
  • The steps needed to develop a repository include defining the goals and purposes of the repository, searching for measures of constructs, and organizing the information and materials collected.

  • Developers should create a statement that specifies the types of users and user needs the repository aims to satisfy and should include both short- and long-term goals.
  • To make key decisions about the repository, it is important to consult with researchers, educators, psychometricians, and subject-matter experts. It is also necessary to have multiple teams of experts for different project tasks and to make sure that they work closely with each other.
  • Developers should set up rules for how to search for measures to add to the repository, as well as guidelines about what to do when information about the selection criteria is unavailable. The rules for searching should be followed consistently but should also be modified over time, as appropriate.
  • Developers should set up criteria for selecting measures for the repository and should err on the side of inclusiveness.
  • It is important to set up a template that specifies what information should be displayed in the repository for each measure, even though some key information is likely to be missing.
  • To maintain and improve the technological aspect of the repository website, developers should conduct regular technological reviews, such as checking whether links provided are still active and accurate and updating the technical functionality of the online portal and interface.
  • Multiple options should be considered for funding the repository's maintenance, such as continued support from foundations and fee-based services.
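One of the steps above is specifying a template of information to display for each measure while tolerating missing data. A minimal sketch of such a record follows; the field names are illustrative assumptions, since the report does not publish its actual template:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical catalog template for one measure. Field names are
# illustrative assumptions, not the report's actual template.

@dataclass
class MeasureRecord:
    name: str
    construct: str                       # e.g., "critical thinking"
    grade_range: str                     # e.g., "K-12"
    assessment_format: str               # e.g., "performance task"
    reliability: Optional[float] = None  # may be unavailable
    validity_evidence: Optional[str] = None
    cost: Optional[str] = None
    source: Optional[str] = None

    def missing_fields(self):
        """List template fields for which no information was found."""
        optional = ("reliability", "validity_evidence", "cost", "source")
        return [f for f in optional if getattr(self, f) is None]
```

Making the quality-related fields optional reflects the finding that some key information is likely to be missing, while `missing_fields` lets the repository flag gaps for users instead of silently omitting them.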

The research reported here was conducted in RAND Education under a grant from the William and Flora Hewlett Foundation.

This report is part of the RAND research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.