This study aims to provide information and recommendations regarding the evaluation design of the Certified Community Behavioral Health Clinic (CCBHC) demonstration. Mandated by Congress in Section 223 of the Protecting Access to Medicare Act of 2014 (PAMA; Public Law 113-93), the CCBHC is a new model of specialty behavioral health clinic designed to provide comprehensive and integrated care for adults with mental health or substance use disorders and children with serious emotional disturbance. Certification criteria for the CCBHCs have been specified by the Substance Abuse and Mental Health Services Administration (SAMHSA), covering six core areas: staffing; accessibility; care coordination; scope of services; quality and other reporting; and organizational authority, governance, and accreditation. In addition, services provided to Medicaid enrollees in CCBHCs will be reimbursed through one of two alternative prospective payment systems. At present, 24 states have been awarded grants to begin the planning process for implementing CCBHCs. Of these states, eight will be selected to participate in the demonstration project beginning in January 2017. Results from the evaluation will inform mandated reports to Congress over the two-year demonstration period and the three years following the end of the demonstration.
To inform this study, the RAND team conducted a series of key informant interviews with representatives from national advocacy organizations and state mental health officials and reviewed the planning grant applications. The interviews covered the priority research questions and the specific data sources available for evaluation purposes. Two important lessons for the evaluation design emerged from these interviews. First, enormous disparity exists across states in the availability of data that could potentially inform the CCBHC evaluation, including variability in Medicaid claims data, other human services utilization data, and health and social functioning outcomes data. This variability across states will present both challenges and opportunities for the evaluation, but the specific data sources available at the state level will be known only after the demonstration states have been selected. Second, the CCBHCs are being implemented at a time when multiple service delivery innovations that target the same or overlapping patient populations are being tested. Other models being tested share some of the same goals as the CCBHCs, such as improving integration between mental health and substance use treatment and integration of behavioral health and general medical care. The evaluation of the CCBHC demonstration project will be more relevant to this complex policy context to the extent that it can provide evidence of the program's performance relative to other innovative service delivery models.
Logic Model and Evaluation Domains
The evaluation design is guided by a logic model of the CCBHC that has six linked domains ordered according to Donabedian's structure-process-outcome model for assessing quality of care (Donabedian, 1980, 1982, 1988) (Figure 1). The CCBHC structures are the model components explicitly required by the legislation. These model structures are intended to directly improve access to a broader scope of evidence-based behavioral health care services and enhance the quality of services provided in specialty behavioral health clinics. If successful, these direct effects are expected to have positive downstream impacts on utilization of health care, including inpatient stays and emergency department (ED) utilization, the health and social functioning of consumers treated in CCBHCs, and costs of care to Medicaid and the clinics.
The logic model can guide identification of the research questions for the evaluation, which can be divided into two types: implementation questions and impact questions. Implementation questions will address the ways in which the model is realized in practice, including barriers to implementation that might affect outcomes, with a particular emphasis on those aspects of structure meant to have a direct effect on access to and quality of care. Examples of implementation questions include the following:
What types of behavioral health services, including care management and coordination, do CCBHCs offer?
How do CCBHCs establish and maintain formal and informal relationships with other providers?
How do CCBHCs respond to prospective payment systems?
How do states establish and maintain prospective payment rates?
How do CCBHCs attempt to improve access to care?
How do clinics collect, report, and use information to improve quality of care?
How do states collect, report, and use information on quality of care?
Impact questions will address the effect that the CCBHC model has on processes and outcomes of care, relative to existing delivery models. Examples of impact questions include the following:
Relative to comparison groups, do CCBHCs expand access to behavioral health care?
Relative to comparison groups, do CCBHCs improve the quality of behavioral health care?
Relative to comparison groups, do CCBHCs improve patterns of total health care utilization?
Relative to comparison groups, do CCBHCs improve consumers' health and functioning outcomes?
Relative to comparison groups, do CCBHCs impact federal and state costs for behavioral health services?
Three major sources of data to address the research questions were identified. First, information on the implementation of the CCBHC model will be generated as part of the functioning of the program, i.e., without imposing any additional data collection burden on the CCBHCs. These sources include documentation of compliance with the certification criteria as part of the application process for the demonstration, required quality measure reporting from the CCBHCs and the states, and cost reports. Second, the evaluation can collect additional data on implementation of the CCBHC model using a range of methods, including quarterly reports (QRs) from CCBHCs, surveys of providers, or qualitative studies of selected CCBHCs. These sources can be used to elaborate the description of how the CCBHCs were implemented, providing important details on change over time and strategies for overcoming implementation barriers. Third, data from Medicaid claims or encounters covered by managed care payments can be analyzed to address the evaluation's impact questions. The availability and suitability of these data are likely to vary dramatically across states, but they provide a uniquely valuable source of information and are a requirement for state participation in the demonstration. They are particularly valuable because they provide information on comparison groups and CCBHCs. In addition to claims or encounter data, some states have other data sources on consumer outcomes or service utilization that can be useful to the evaluation, although the existence of such data sets will not be known until the demonstration states are selected.
Table 1 summarizes our conclusions regarding the availability of data for each evaluation domain. As shown in the table, all of the evaluation domains are at least in part covered by the four major sources of existing data: the certification process conducted by states (as reported in the applications for the demonstration or obtained directly from states), clinical data derived from electronic health records (EHRs) (including the required quality measures and Mental Health Statistics Improvement Program [MHSIP] surveys), claims and encounter records, and cost reports. These sources have the potential to provide a robust description of CCBHC implementation in the demonstration states. However, these data sources are highly variable in format and detailed content across states and across providers and payers within states. Some states already have sophisticated, integrated data systems that can create uniform, detailed, and reliable data on clinic performance, but most states are not yet at that point. In addition, state data systems are in flux, with new capabilities being added frequently. The quality and timing of data also vary depending on whether a consumer is covered under Medicaid fee-for-service, a managed care organization (MCO), or some combination of the two. Fee-for-service claims have the advantage of being more complete and accessible, because they are publicly adjudicated, but MCO encounter data can be available more quickly and may be better suited to analysis of quality of care, which MCOs are already likely to be assessing. The evaluation design will need to take these variations into account in assessing the potential use of data sources in each demonstration state.
Table 1. Evaluation Data Sources, by Domain
| Data Source | Structure | Access | Quality | Utilization | Costs | Health and Functioning |
|---|---|---|---|---|---|---|
| Existing clinic operations data | Demonstration application | — | Demonstration application | CRs | CRs | Transformation Accountability System, National Outcome Measures |
| New clinic operations data | QRs | QRs | — | — | QRs | — |
| Existing claims and encounter data | — | Required measures | Required measures | Required measures | Required measures | — |
| New claims/encounter data | — | Additional measures | Additional measures | Additional measures | Additional measures | — |
| Existing clinical data | — | EHRs/quality measure^a | EHRs/quality measures | EHRs/quality measure^b | — | EHRs/quality measure^c |
| New clinical data | — | EHRs/additional measures | EHRs/additional measures | EHRs/additional measures | — | EHRs/additional measures |
| Existing surveys | — | MHSIP Consumer and Family | MHSIP Consumer and Family | — | — | MHSIP, quality measure^d |
| New surveys | Provider and clinic survey | Provider and clinic survey | Provider and clinic survey | — | — | — |
| New qualitative data | Interviews, site visits | Site visits | Consumer focus groups | — | Site visits | — |
SOURCE: Authors' analysis.
NOTE: CR = cost report.
a. Each clinic must report a measure of timely access to care.
b. Each clinic must report a measure of hospital readmissions.
c. Each clinic must report a measure of depression remission.
d. Each state must report one measure of housing status for CCBHC clients.
Gaps in the reporting requirements with respect to structure could be filled through strategically targeted new data collection efforts, which would most likely include additional structured reporting of operational data by CCBHCs, surveys of providers, and qualitative investigations of specific operational issues. Additional electronic health record-based quality measures could be requested from CCBHCs, provided that the additional reporting burden is minimal, and additional quality measures could be specified when the evaluation has access to the claims or encounter data. Data collection and reporting systems put in place for the reporting required by the demonstration could be expanded to include additional instruments to minimize costs and burden.
Data Sources for Comparison Groups
Collection of data for comparison purposes presents additional challenges. To assess the impact of the CCBHC model, the services received by consumers in a CCBHC should be compared with services that similar consumers received in other clinical settings. Clinical data and claims and encounter data are likely to be available for comparison purposes for the evaluation, provided that an appropriate comparison group can be identified in the relevant data set. For EHR-based measures, this requires that comparison clinics be found with EHRs that can produce measures to the same specifications, likely adding substantial burden to the evaluation. For claims and encounter data, this requires that states or MCOs have place-of-service codes or provider identifiers that will allow them to reliably attribute consumers to specific clinics. In fact, several states are using their planning grants to put such systems in place.
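The attribution step described above can be illustrated with a minimal sketch. All field names (`billing_npi`, `consumer_id`) and identifier values here are hypothetical, not drawn from any state's actual claims layout; the point is simply that a provider-identifier lookup is what allows encounters to be tied to specific clinics:

```python
# Hypothetical attribution of Medicaid encounter records to clinics.
# Field names and NPI values are illustrative only.

# Lookup built from a state's provider enrollment file: NPI -> clinic.
CLINIC_BY_NPI = {
    "1234567890": "CCBHC-A",
    "9876543210": "CMHC-B (comparison)",
}

def attribute(encounters):
    """Tag each encounter with a clinic; drop records whose billing
    provider identifier cannot be matched to a known clinic."""
    attributed = []
    for enc in encounters:
        clinic = CLINIC_BY_NPI.get(enc["billing_npi"])
        if clinic is not None:
            attributed.append({**enc, "clinic": clinic})
    return attributed

encounters = [
    {"consumer_id": "c1", "billing_npi": "1234567890"},
    {"consumer_id": "c2", "billing_npi": "0000000000"},  # unknown provider
]
result = attribute(encounters)  # only c1 can be attributed
```

In practice the unmatched records, discarded here for simplicity, would need to be examined: a high unmatched rate in a state would signal that its identifiers cannot support reliable clinic-level comparisons.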
Collection of operational and survey data from comparison sites presents more serious challenges. Some states already might have systematic reporting by community mental health centers (CMHCs) that could be used to provide data comparable to that collected for CCBHCs, but those systems are even more variable across states than the claims and encounter data systems. Some clinics in some states already might be reporting similar data to SAMHSA because they are participating in another federal program. Existing consumer survey data, which exist in the majority of states, offer another potential source of comparison data, but sampling methods vary across states and might not provide appropriate controls without specific adjustments for that purpose. There is no alternative to investigating these potential data sources on a state-by-state basis. New data collection efforts at comparison clinics are feasible, but likely to be costly.
Burden of Data Collection
Assessing the burden posed by data collection is a critical component of evaluation design and a requirement of the Office of Management and Budget in certain circumstances. The evaluator must weigh the benefit of new data collection by CCBHCs, and potential comparison sites, against the burden on clinic staff and the resulting quality of the data. If providing data beyond the demonstration requirements is very burdensome for participating clinic staff, then they are less likely to put in the effort needed to ensure complete and valid data. This is a particular concern for comparison sites, which do not face the incentives that lead participating sites to produce even the data required by the demonstration.
The largest burden would likely result from a request for EHR-based measures from CCBHCs or comparison clinics if they are not currently reporting those measures. The burden of additional EHR-based quality measures would depend on the CCBHCs' ability to specify them within their systems. In some cases, reporting on an additional measure could be a relatively simple change to make, but the same measure might not be easily reported in other cases. Likewise, collecting the data from comparison clinics will depend on the EHR systems in those clinics, which can be less sophisticated technologically than the CCBHCs'. Additional burden would be imposed on CCBHCs should the evaluation require quarterly reports, a consumer survey other than the MHSIP, a provider survey, provider interviews, or site visits. Ideally, the evaluator would collect analogous information from comparison sites, but doing so might require some type of incentives offered by the state or the evaluator.
Limitations of Available Data
Our exploration of data sources also identified two important gaps that can limit the evaluation in significant ways. First, while PAMA intends to have population-level impacts on access to behavioral health care, data to support population-based measures of access are lacking. It might be possible to link data from clinic records to population data from the census on the areas that the CCBHCs are designed to serve. However, the population of untreated seriously mentally ill adults is likely to be poorly tracked in the census data and highly mobile across catchment areas. Second, data on use of nonmedical social services also are lacking. Many consumers also receive services, such as temporary housing, that are funded through state or local programs. Lack of more comprehensive data on service use is concerning because of the possibility of shifting care between providers. For instance, consumers who are receiving services from locally funded providers may instead opt to receive those services from a CCBHC, where they will be funded through Medicaid. In the absence of data on the locally funded services, this shift in payer will appear to be an increase in utilization.
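The payer-shift concern can be made concrete with a small hypothetical calculation, in which a consumer's service contacts move from a locally funded provider (invisible in Medicaid data) to a Medicaid-billed CCBHC with no change in total services received:

```python
# Illustrative payer-shift arithmetic; all counts are hypothetical.

# Before the demonstration: 10 contacts with a locally funded provider
# (not in Medicaid data) and 5 Medicaid-billed clinic visits.
local_before, medicaid_before = 10, 5

# After: the same contacts shift to the CCBHC and are billed to Medicaid.
local_after, medicaid_after = 0, 15

# What the Medicaid data alone would show: an apparent increase.
observed_medicaid_change = medicaid_after - medicaid_before

# The true change in total service use: none.
true_total_change = (local_after + medicaid_after) - (local_before + medicaid_before)
```

Here the Medicaid data show 10 additional visits while total utilization is unchanged, which is exactly the distortion the text warns about when locally funded services are unobserved.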
Selection of Comparison Groups
Addressing the impact questions will require the selection of comparison groups from within each of the demonstration states, against which the performance of the CCBHCs can be assessed. The evaluation will have a number of options in selecting appropriate groups, such as whether to select comparisons at the clinic level or the individual consumer level. Constraints of the available data and the nature of the mental health systems in the demonstration states will need to be investigated to inform these decisions. Comparison groups should be selected so that the findings can be interpreted with respect to well-understood existing models of care. The states awarded a CCBHC planning grant already have specified potential comparison groups in their grant applications and are working on refining these plans during the planning grant period. In developing a detailed strategy for addressing the impact questions, the evaluation will need to assess and refine these analytic recommendations from the states. The evaluation can elect to focus on a subset of states to address particular impact questions, based on the assessment of the value of the available data.
Based on the findings of this study, we recommend that the evaluation of the CCBHC demonstration project have three components. First, the evaluation should compile profiles of the mental health systems in each of the demonstration states. The state profiles will include information on the current delivery models, which might serve as informative comparison groups, and the data sources available in the state that could be used for the evaluation. Second, the implementation questions can be addressed through a mix of existing data sources and supplemental data collection efforts (Table 2). The existing data can be summarized systematically across states to provide a basic description of how the CCBHC model was implemented. However, supplemental data collection can add considerable depth to those descriptions. The particular selection of supplemental data collection efforts will be guided in part by the state profiles. The specific methods used will depend on the budget for the evaluation and the need to minimize the burden of the evaluation on the CCBHCs.
Table 2. Coverage of Implementation Questions by Core and Supplemental Data Elements
| Research Question | Core Data Element | Supplemental Data Element |
|---|---|---|
| What types of behavioral health services, including care management and coordination, do CCBHCs offer? | Description of services and staffing | Challenges of providing services; factors influencing selection of services; change in services over time |
| How do CCBHCs establish and maintain formal and informal relationships with other providers? | Description of relationships with designated collaborating organizations | Relationships with network of community providers; data-sharing agreements; referral tracking |
| How do CCBHCs respond to prospective payment systems? | Submitted claims or encounter data and CRs | Administrators' and clinicians' perspectives; accounts of operational impact |
| How do states establish and maintain prospective payment rates? | Analyses supporting rate-setting and revision | Policymakers' perspectives |
| How do CCBHCs attempt to improve access to care? | Policies as described in the certification process; timeliness of care as described in quality measures | Outreach efforts; clinical processes for tracking access; barriers to increasing access |
| How do clinics collect, report, and use information to improve quality of care? | Extent of reporting of required measures | Use of data to improve quality; monitoring of quality beyond required measures |
| How do states collect, report, and use information on quality of care? | Extent of reporting of required measures | Use of data to improve quality; monitoring of quality beyond required measures |
SOURCE: Authors' analysis.
Third, the impact questions can be addressed primarily through analysis of the claims and encounter data, with additional information drawn from cost reports or, potentially, other state specific data sets (Table 3). The specific comparisons drawn to examine the impact of the model and the statistical approach taken to the analysis will depend in part on the state profiles.
Table 3. Sources of Data for Addressing Impact Research Questions
| Impact Question | Claims/Encounters | CR | Other State Data |
|---|---|---|---|
| Relative to comparison groups, do CCBHCs expand access to behavioral health care? | ✓ | — | ✓ |
| Relative to comparison groups, do CCBHCs improve the quality of behavioral health care? | ✓ | — | ✓ |
| Relative to comparison groups, do CCBHCs improve patterns of total health care utilization? | ✓ | — | — |
| Relative to comparison groups, do CCBHCs improve consumers' health and functioning outcomes? | — | — | ✓ |
| Relative to comparison groups, do CCBHCs affect federal and state costs for behavioral health services? | ✓ | ✓ | — |
SOURCE: Authors' analysis.
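One statistical approach consistent with the comparison-group design described above is a difference-in-differences contrast of pre/post changes between CCBHC and comparison consumers. The following is a minimal sketch with invented numbers, using simple group means rather than the regression framework an actual evaluation would require (which would also adjust for consumer characteristics):

```python
# Difference-in-differences sketch for a claims-derived outcome, e.g.,
# ED visits per consumer per year. All values are illustrative.

def mean(values):
    return sum(values) / len(values)

# Per-consumer outcome values, keyed by (group, period).
data = {
    ("ccbhc", "pre"):       [2.0, 3.0, 1.0, 2.0],
    ("ccbhc", "post"):      [1.0, 2.0, 1.0, 1.0],
    ("comparison", "pre"):  [2.0, 2.0, 3.0, 1.0],
    ("comparison", "post"): [2.0, 2.0, 2.0, 2.0],
}

def did_estimate(data):
    """Post-minus-pre change in the CCBHC group, net of the same
    change in the comparison group."""
    ccbhc_change = mean(data[("ccbhc", "post")]) - mean(data[("ccbhc", "pre")])
    comp_change = mean(data[("comparison", "post")]) - mean(data[("comparison", "pre")])
    return ccbhc_change - comp_change

effect = did_estimate(data)  # negative values suggest reduced ED use
```

The comparison-group change nets out secular trends affecting both groups, which is why the state profiles' assessment of comparable delivery models matters: the estimate is only as credible as the assumption that the two groups would have trended alike absent the CCBHC model.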
The proposed design guidelines are flexible, allowing for considerable tailoring of the methods to the evaluation budget while focusing on a set of core components that will provide information for reports to Congress. The evaluation can be efficient, making maximum use of existing data to reduce costs, while remaining timely and relevant to the complex contemporary policy context. However, some new data collection efforts will be required to explore important issues affecting implementation of the model over time. Results of the evaluation will provide a strong basis for the mandated reports to Congress and the ultimate recommendations to continue, discontinue, or modify the model.