The Affordable Care Act (ACA), signed into law on March 23, 2010, greatly expanded access to health insurance in the United States. The law's major health insurance coverage provisions took effect beginning in 2014, including expansions to the Medicaid program, rating reforms in the individual market, federal subsidies for Marketplace enrollees, and an individual mandate requiring most Americans to obtain coverage or pay a tax penalty. Many of the law's provisions will be phased in over time; for example, the individual mandate penalties reach their maximum level in 2016, and a mandate requiring employers to offer coverage is scheduled to take effect in 2015. As the major reforms are rolled out, it is important to have timely information on the law's effects in terms of who has become newly insured, what type of insurance they have chosen, and whether there are any unintended consequences, such as reduced access to employer-sponsored coverage.
We developed the RAND Health Reform Opinion Study (RHROS) to get timely information on how the law is affecting families in the United States. The survey relies on the RAND American Life Panel (ALP), a sample of individuals who have agreed to participate in panel surveys and who can be accessed quickly when new questions arise. The underlying sample is nationally representative, with an oversample of low-income individuals who are most likely to be affected by the law. The survey is Internet-based, and respondents without their own computers are provided with computers and free Internet service to enable them to participate.
Several studies conducted by other organizations also track health insurance enrollment and responses to the ACA. Gallup and the Urban Institute both operate surveys that, like RHROS, can be analyzed quickly to provide real-time information on emerging trends. Federal agencies conduct much larger surveys to track health insurance trends, including the National Health Interview Survey and the Current Population Survey. These federal surveys, while providing more information than the smaller, private surveys, often take longer to field, clean, and analyze.
The RHROS is unique in two ways. First, each time we field the survey, we ask the questions to the same group of respondents. By contacting the same respondents each time, we are able to track transitions in coverage. This allows us to assess not only the net changes in health insurance coverage, but also the numbers of those who gain and lose insurance. Furthermore, we can track transitions between types of insurance. The ACA changes the options that individuals face; some who were previously insured may transition to insurance through Medicaid or through the Marketplaces. Understanding these transitions, not just the overall sources of coverage, is key to understanding the impact of the ACA. Second, we typically achieve a response rate of 60–70 percent within two weeks of fielding a new survey module, which allows the data to be analyzed quickly to provide insight into emerging policy issues.
In this article, we describe the methodology used to recruit participants into RHROS and to weight the data to make the survey estimates nationally representative. This article provides background for estimates that will be produced using data from surveys that we are conducting between November 2014 and December 2015. Over time, we have made several refinements to the methodology underlying previous RHROS data collection and analysis, which is described elsewhere (Carman and Eibner, 2014). In this article we briefly explain these improvements to our prior methodology.
Survey participants for the RHROS are drawn from the ALP. The ALP began surveying respondents in January 2006; since that time, more than 400 surveys have been fielded. The ALP is a nationally representative Internet panel that includes both a probability and a convenience-based sample. Participants in the probability sample were recruited via probability-based mail and random-digit-dial sampling methods. The convenience sample includes a “snowball” sample in which participants were given the opportunity to invite friends and acquaintances to participate and a respondent-driven cohort that sampled enrollees through social networks. Unlike opt-in Internet surveys, Internet access is not required to participate; those who do not already have Internet access or computers are provided with laptops and Internet access. Respondents are compensated for each survey at a rate of $20 per half hour, prorated for shorter surveys. Over the history of the ALP, recruiting methods have evolved. Detailed information about the sample composition and the past recruiting methods can be found at https://alpdata.rand.org/?page=panelcomposition (RAND American Life Panel, 2014). For the RHROS, we limit our sampling frame to respondents in the ALP aged 18 to 64; those 65 and older are excluded because they are typically eligible for Medicare and thus significantly less likely to be uninsured or have their insurance coverage affected by the ACA.
Over time, we have made several refinements to our methodology to improve the validity of our results. In Carman and Eibner (2014), we relied on the full ALP panel, which includes respondents who were recruited via both probability-based and convenience methods.* In all subsequent analyses, we exclude those recruited via convenience methods. While this reduces our sample size, it also reduces the risk of bias introduced by reliance on the convenience sample. The bias arises because it is not possible to correct for correlation that may exist between observations (i.e., sampled individuals) in convenience samples. With convenience samples in general, and snowball samples in particular, observations may not be independent of each other, leading to oversampling of some subsets of the population, and the behavior of these individuals may be more highly correlated than the behavior of randomly selected individuals. There is no generally accepted method to account for these inter-observation correlations; probability sampling removes this threat and the bias associated with it.
Approximately 3,600 respondents ages 18 to 64 from the ALP probability sample will be invited to participate in our 2014 and 2015 surveys, though participation will vary. In the section below that describes weighting, we provide summary statistics of the demographic characteristics of those invited to participate in our surveys. These demographic characteristics are updated quarterly by all panel members.
Survey Timing and Items
Between November 2014 and December 2015 we will conduct four surveys asking respondents about their health insurance coverage. The first survey took place at the beginning of the 2015 healthcare.gov open enrollment period; it was in the field from November 11, 2014, until December 1, 2014. The second survey was fielded at the end of the open enrollment period, between February 16, 2015, and March 2, 2015. The third survey will be fielded after the April 15, 2015, tax filing deadline. The final survey will be fielded in late summer or early fall of 2015. We have left the precise dates of these surveys open to allow for any changes that may occur over the course of the year. For example, we may adjust the dates for the April survey if a special open enrollment period is allowed after taxes are filed. Each survey will remain in the field for approximately two weeks. Most respondents in the ALP respond in the first few days, but by the end of two weeks, typically 60 to 70 percent of those invited respond.
Each survey will be brief and will be designed to take approximately two minutes per respondent. These surveys build on the surveys used for the RHROS in 2013 and 2014, which tracked both health insurance coverage and opinions of the ACA during the first open enrollment period. Questions have been added to focus on access to and changes in health insurance coverage. Most of the survey will remain constant over the course of the project, except for time-sensitive items or those for which there is more salience at a particular time of year.
First, respondents are asked whether or not they have insurance coverage. If they do have coverage, they are then asked about the source of their insurance coverage. In contrast to RHROS surveys conducted before November 2014, we now include preloaded names of state exchanges and locally used names for Medicaid programs (such as Medi-Cal in California). This is intended to make it easier for respondents to identify their type of insurance. However, some respondents may still have difficulty identifying their source of coverage. Thus, we also allow them to write in their source of coverage.
For surveys conducted in April and September 2015, respondents who reported health insurance coverage in February 2015 will be shown their previous response and asked if they have had a change in their insurance coverage. This is designed to reduce respondent burden by reducing the number of questions a respondent must answer. For example, a respondent might see “Previously you told us that you have Medicaid. Is that still correct?” Respondents can report a change in coverage or a mistake in their previous response.
RHROS data collected between September 2013 and May 2014 showed that a substantial number of previously uninsured people gained coverage through employer-sponsored insurance (ESI). To test the extent of take-up of existing ESI offers versus changes in access to ESI offers, we ask all respondents whether they have access to health insurance through their employer or a family member's employer.
Finally, we ask respondents if they have had a change in health insurance coverage and the cause of this change. We ask this question of all respondents because some may have a change in coverage without having a change in coverage type. For example, people who get a new job may be covered by a new insurer even if they still are covered by ESI.
In addition to these questions, each survey will contain questions that are particularly pertinent at the time the survey is fielded. In November 2014, we asked about expected coverage for 2015 and whether respondents had difficulty accessing health care. In February 2015, we asked about awareness of King v. Burwell, which was argued before the Supreme Court in early March. After income taxes are due in April 2015, we might ask about whether people had to pay a penalty for not being insured in 2014. However, we leave the precise questions undefined in order to be responsive to current events.
After the survey has been fielded, we will clean the raw data and apply a hierarchy so that each respondent is assigned to a primary type of insurance coverage. Some respondents write in the source of their insurance coverage; we assign these respondents to one of the primary categories when possible. In some cases, this is easy; for example, those who write in "work" are assigned to ESI. In other cases, respondents include the name of a program that can be matched to a type of insurance in their state of residence. In some cases, respondents provide only the name of an insurer; then it is not possible to assign a source of insurance, because the same insurance company could offer coverage through multiple channels. If a respondent reports a mistake in previously reported coverage, their past responses are adjusted to reflect the correct source of coverage.
For respondents who report more than one source of insurance, we assign a primary insurance source according to the following hierarchy:
- Medicaid (excluding those dually enrolled in Medicaid and Medicare)
- ESI (including retiree insurance)
- insurance through a Marketplace plan
- other forms of insurance (including Medicare, dual Medicaid-Medicare enrollees, military or U.S. Department of Veterans Affairs (VA) insurance, and other governmental plans)
- private non-Marketplace insurance
- no insurance.
The first type of insurance listed in the hierarchy is considered the primary insurance type. Approximately 5.6 percent of our sample report more than one type of insurance. As such, the hierarchy has no effect on the majority of the sample, who report either no insurance or only one type of insurance. The hierarchy reported here is a refinement of the methodology used in Carman and Eibner (2014) and makes two changes to our prior approach. First, we now classify those with retiree insurance as having ESI rather than "other." This is a more specific and accurate representation of their source of insurance. Second, we now place Marketplace coverage below Medicaid and employer coverage in the hierarchy. In the previous report, insurance through the Marketplace was given priority in the hierarchy. However, because those with employer insurance or Medicaid would not be eligible for subsidies on the Marketplaces, we felt it was unlikely that an individual would have both coverage sources, and that these reports of dual coverage were likely errors. We made the judgment call that employer or Medicaid reports would be more accurate than Marketplace reports because the Marketplaces are very new and prior studies have shown that people do not always fully understand this market (Pascale et al., 2013).
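As a sketch, the hierarchy can be applied as a simple first-match rule over a respondent's reported coverage types. This is an illustrative outline only; the category labels here are ours, not the actual survey codes.

```python
# Priority order described above; the first match wins.
# Labels are illustrative, not the actual RHROS survey codes.
HIERARCHY = [
    "medicaid",                  # excluding Medicaid-Medicare dual enrollees
    "esi",                       # including retiree insurance
    "marketplace",
    "other",                     # Medicare, dual enrollees, military/VA, other government
    "private_non_marketplace",
]

def primary_coverage(reported_types):
    """Return the highest-priority coverage type reported, or 'uninsured'."""
    for category in HIERARCHY:
        if category in reported_types:
            return category
    return "uninsured"
```

Under this rule, for example, a respondent reporting both ESI and a Marketplace plan is classified as ESI, reflecting the judgment described above that the Marketplace report is more likely to be an error.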
Because surveys cannot reach all members of the population, we create weights so that results are representative of the population overall. Weights are widely used in survey research. We apply a two-step approach. In the first step, we use a raking algorithm, following Deming (1943) and Deville et al. (1993), to match the distribution of characteristics in our sample as of September 2013 to the estimated distribution of characteristics of the population aged 18 to 64 from the 2013 Current Population Survey (CPS). We aim to match population proportions on interactions of gender and race/ethnicity, gender and education, gender and age, and household income interacted with household size. To create weights, it is necessary to account for missing values of certain weighting variables for some observations. Missing values have been rare in previous waves of RHROS, with less than 0.5 percent of values missing for each variable used in weighting. We impute missing values sequentially, beginning with the most basic (and least frequently missing) demographic traits of gender, age, and citizenship, for which missing values are replaced with the mode of each variable. The remaining missing values are then imputed using linear regression for continuous variables and logistic regression for discrete variables (including multinomial or ordinal multinomial logistic regression for discrete variables with more than two outcomes).
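The first-step raking can be sketched as iterative proportional fitting: cycle through the weighting dimensions, scaling weights so each group's weighted share matches its population target. The sketch below is a minimal illustration under assumed inputs (two dimensions, made-up targets), not the actual RHROS implementation, which rakes on the interacted margins described above.

```python
import numpy as np

def rake(weights, categories, targets, n_iter=50):
    """Simple raking (iterative proportional fitting), in the spirit of
    Deming (1943) and Deville et al. (1993).

    `categories` maps each weighting dimension to an array of group
    labels, one per respondent; `targets` gives the population share
    of each group within each dimension (shares sum to 1 per dimension).
    Returns weights normalized to sum to 1.
    """
    w = np.asarray(weights, dtype=float).copy()
    for _ in range(n_iter):
        for dim, labels in categories.items():
            # Current weighted share of each group on this dimension
            shares = {g: w[labels == g].sum() / w.sum() for g in targets[dim]}
            # Scale each group so its share matches the population target
            for g, target_share in targets[dim].items():
                if shares[g] > 0:
                    w[labels == g] *= target_share / shares[g]
    return w / w.sum()
```

After convergence, the weighted margins match the targets on every raked dimension simultaneously, even though each pass adjusts only one dimension at a time.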
In the second step, we create nonresponse weights to adjust estimates for nonparticipation in later surveys among those who completed the September 2013 survey. The inverse probability weights account for differences between respondents and nonrespondents on observable factors that are used to predict participation in the later survey. Factors to be included in the inverse probability weights include gender, age, age squared, family income, education, household size, race, whether one was born in the United States, job status, and type of work. Inverse probability weights are calculated using a logistic regression model of participation in the later survey as a function of the observed factors mentioned above. We divide the weight calculated in the first step by the predicted probability of responding in the second step to calculate the final weight.
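The second-step adjustment can be sketched as follows. This is an illustrative outline, not the actual RHROS weighting code; the predicted response probabilities are assumed to come from the logistic regression of participation described above.

```python
import numpy as np

def nonresponse_adjusted_weights(base_weights, response_prob, responded):
    """Combine step-1 raking weights with a step-2 inverse-probability
    nonresponse adjustment: divide each respondent's base weight by
    their predicted probability of responding, then renormalize over
    the respondents who actually participated."""
    base_weights = np.asarray(base_weights, dtype=float)
    response_prob = np.asarray(response_prob, dtype=float)
    responded = np.asarray(responded, dtype=bool)
    adjusted = base_weights[responded] / response_prob[responded]
    return adjusted / adjusted.sum()
```

Intuitively, a respondent whose predicted probability of responding is 0.5 has their weight doubled relative to a certain responder, compensating for similar individuals who dropped out.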
The two-step weighting algorithm represents a refinement to the methodology used in Carman and Eibner (2014). There we used a weighting algorithm with only one step, calculating weights to match the distribution of characteristics of respondents responding in both September 2013 and March 2014. Adjusting our weights to account for nonresponse allows us to account for patterns in nonresponse related to demographic characteristics. If some groups disproportionately fail to participate in future surveys, then the one-step algorithm would underweight the responses of the individuals in those groups who did respond in both surveys.
Table 1 provides summary statistics of the RHROS sample, with and without weights, alongside the comparison population from the CPS. Because we match our sample to multiple characteristics of the CPS, we do not match the CPS perfectly on each dimension. However, Table 1 shows that our weighted data are very similar to the CPS data on all characteristics analyzed.
Table 1. Characteristics of 3,617 Panel Members Age 18 to 64
| Variable | RHROS Unweighted | RHROS Weighted to CPS | CPS |
|---|---|---|---|
| Less than high school | 6.0% | 8.9% | 12.0% |
| Income less than $30,000 | 31.3% | 28.5% | 25.0% |
| Income ≥ $75,000 | 19.7% | 19.6% | 23.0% |
NOTE: In this table, we base our sample on all probability sample members of the ALP invited to participate in the November 2014 survey, regardless of participation.
After each survey we will produce several key pieces of output. When reporting the newest results for each of our four surveys, we will examine changes in coverage from September 2013 to the current date, as well as from November 2014 to the current date. This will allow us to observe transitions since the rollout of most provisions of the ACA (from before the first open enrollment period to the present date) as well as transitions during the second open enrollment period. We calculate the weighted share of respondents with each source of current and past insurance. We then multiply these percentages by the total number of Americans between ages 18 and 64 to estimate the number of people with each source of insurance coverage.
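The final scaling step can be sketched as follows; the category labels and population total here are hypothetical and for illustration only.

```python
import numpy as np

def coverage_counts(weights, coverage, population_millions):
    """Convert weighted shares of each coverage category into
    population counts, in millions of people."""
    weights = np.asarray(weights, dtype=float)
    coverage = np.asarray(coverage)
    total = weights.sum()
    return {c: weights[coverage == c].sum() / total * population_millions
            for c in np.unique(coverage)}
```

For example, under a hypothetical population of 200 million adults ages 18 to 64, a weighted ESI share of 75 percent would map to an estimate of 150 million people with ESI.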
As with all surveys, there is a margin of error associated with each of our results. We report the margin of error as plus or minus 1.96 times the standard error, which corresponds to a 95-percent confidence interval. This means that if the survey were repeated many times and a 95-percent confidence interval were calculated each time, the interval would contain the true value in about 95 percent of the repetitions. When calculating margins of error for differences between two time periods, we use a bootstrap methodology (following Efron and Tibshirani, 1994), which accounts for the fact that responses in the two periods are likely to be correlated because the same respondents were contacted in both periods. The margin of error also accounts for the survey weights. When comparing these results to other surveys, it is important to note that the margin of error is a function of the sample size, with larger sample sizes leading to a smaller margin of error and therefore a more precise estimate.**
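The paired-bootstrap calculation can be sketched as below. This is a minimal illustration of the approach under assumed inputs, not the production code; resampling whole respondents (rather than independent draws per wave) is what preserves the within-person correlation across waves.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def bootstrap_margin_of_error(weights, uninsured_t1, uninsured_t2, n_boot=1000):
    """Margin of error (1.96 * bootstrap standard error) for the change
    in the weighted uninsured share between two survey waves."""
    w = np.asarray(weights, dtype=float)
    u1 = np.asarray(uninsured_t1, dtype=float)
    u2 = np.asarray(uninsured_t2, dtype=float)
    n = len(w)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample respondents with replacement
        diffs[b] = (np.average(u2[idx], weights=w[idx])
                    - np.average(u1[idx], weights=w[idx]))
    return 1.96 * diffs.std()
```

Because each bootstrap draw keeps a respondent's two waves together, the variance of the difference correctly reflects the covariance between the two periods.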
The tables below provide examples of the type of analysis that we will conduct using the RHROS. The example compares data collected in September 2013 to data collected in November 2014. Table 2 shows the change in insurance coverage for each insurance type. The net changes in insurance shown in Table 2 are very consistent with other studies (Sommers et al., 2014; Collins et al., 2014; and Long et al., 2014), validating that the RHROS produces consistent information about Americans' response to the ACA. Specifically, we estimate a 12.9-million-person decline in the number of uninsured adults between September 2013 and November 2014. Because the margin of error on our estimates is plus or minus 6 million, our results imply that the true change likely falls within the range of 6.9 to 18.9 million. Using the Urban Institute's Health Reform Monitoring Study, Long et al. (2014) found a 10.6-million-person decline in uninsurance among adults ages 18 to 64 between September 2013 and September 2014. A prior study using data from Gallup found a 10.3-million-person decline in the number uninsured between September 2013 and June 2014 (Sommers et al., 2014). Like the RAND estimate, both of these estimates have a relatively wide margin of error. For example, Sommers et al. report that the 95-percent confidence interval surrounding their estimate ranges from 7.3 million to 17.2 million, and Long et al. (2014) report a 95-percent confidence interval ranging from 8.5 million to 12.6 million. There is substantial overlap in the confidence intervals reported across all three analyses, and our estimate is within the range reported in prior studies.
The analysis in Table 2 further shows that the largest gains in insurance coverage occurred in the individual market (driven by take-up of Marketplace plans) and in the employer market. The increase in employer coverage is likely driven in part by the individual mandate, which may be prompting employees to take offers of coverage that were previously available to them.
Table 2. Net Changes in Insurance Coverage from September 2013 to November 2014
| Insurance Category | September 2013 | November 2014 | Net Change |
|---|---|---|---|
| ESI | 115.3 (+/– 7.6) | 121.9 (+/– 7.4) | 6.6 (+/– 5.9) |
| Medicaid | 10.4 (+/– 2.5) | 21.2 (+/– 4.1) | 10.8 (+/– 3.8) |
| Self-pay | 8.5 (+/– 2.6) | 7.2 (+/– 2.2) | -1.3 (+/– 2.3) |
| Marketplace | — | 7.6 (+/– 2.4) | 7.6 (+/– 2.4) |
| Other | 24.0 (+/– 5.9) | 13.3 (+/– 4.6) | -10.7 (+/– 4.6) |
| Subtotal: insured | 158.3 (+/– 6.3) | 171.3 (+/– 5.4) | 12.9 (+/– 6.0) |
| Uninsured | 40.2 (+/– 6.3) | 27.3 (+/– 5.4) | -12.9 (+/– 6.0) |

NOTE: All numbers are in millions of individuals. Figures following +/– are margins of error; negative values indicate declines.
The results presented in Table 2 do not make it clear how people have transitioned across insurance categories. For example, we cannot tell from Table 2 what percentage of people enrolled in Marketplace coverage were previously uninsured. Because the RHROS is longitudinal, we have the ability to observe how coverage has changed over time. Tables 3 and 4 draw on the longitudinal nature of these data, highlighting transitions in health insurance. Table 3 shows that a total of 20.4 million people transitioned from uninsured to insured status, while another 7.4 million became uninsured, for a net gain in coverage of 12.9 million.
Table 3. Transitions in Insurance Coverage from September 2013 to November 2014
| September 2013 Status | Uninsured (November 2014) | Insured (November 2014) | Total (September 2013) |
|---|---|---|---|
| Uninsured | 19.8 (+/– 4.71) | 20.4 (+/– 4.90) | 40.2 (+/– 6.38) |
| Insured | 7.4 (+/– 3.31) | 150.9 (+/– 6.81) | 158.3 (+/– 6.38) |
| Total (November 2014) | 27.3 (+/– 5.53) | 171.3 (+/– 5.53) | 198.5 |

NOTES: All numbers (including margins of error) are in millions of individuals. Rows show September 2013 status; columns show November 2014 status. Diagonal cells represent individuals who experienced no transition; off-diagonal cells show transitions from 2013 to 2014. Figures following +/– are margins of error.
In Table 4, we look at transitions in insurance at a detailed level, considering not only transitions from insured to uninsured status but also transitions between types of insurance (e.g., ESI to Marketplace coverage, ESI to Medicaid coverage, etc.). One concern policymakers have raised in the past is that Medicaid expansion may cause people to drop private insurance coverage, and, as a result, it could lead to federal spending increases that are high relative to the proportion of newly insured individuals. Table 4 suggests that this type of “crowd-out” is small in our data; only 1.5 million out of 21.2 million Medicaid enrollees in 2014 were previously enrolled in employer coverage. However, among the 7.6 million people enrolled through the Marketplaces, only 3.1 million were previously uninsured. This figure implies that more than half of Marketplace enrollees had coverage from another source prior to the ACA. Among those transitioning from uninsured to insured status between 2013 and 2014, more than 35 percent (7.3 million out of 20.4 million) became insured through employer coverage.
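The transition tallies behind tables like these can be sketched as a weighted cross-tabulation of each respondent's coverage in the two waves. This is an illustrative outline with made-up category labels and a hypothetical population total, not the actual RHROS code.

```python
from collections import defaultdict

def transition_flows(weights, coverage_before, coverage_after, population_millions):
    """Weighted flows between coverage categories across two waves,
    scaled to population counts in millions of people."""
    total = float(sum(weights))
    flows = defaultdict(float)
    for w, before, after in zip(weights, coverage_before, coverage_after):
        # Each respondent contributes their weighted population share
        # to the (before, after) cell.
        flows[(before, after)] += w / total * population_millions
    return dict(flows)
```

Diagonal entries (where the before and after categories match) correspond to people with no transition; off-diagonal entries are the movers highlighted in Tables 3 and 4.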
The ability to track these types of changes across insurance categories is one of the key advantages of the RHROS data set. However, a limitation of our analysis is that the sample sizes in our survey are small. As a result, the margins of errors reported in our tables are relatively wide. Ultimately, large longitudinal data sources conducted by the federal government, such as the Survey of Income and Program Participation and the Medical Expenditure Panel Survey, will provide more precise estimates of the number of people transitioning across insurance categories. But the RHROS data can provide more timely estimates of these transitions, with a greater ability to tailor survey questions to address emerging policy concerns.
Table 4. Transitions Across Insurance Categories from September 2013 to November 2014
| September 2013 Coverage | No Insurance | ESI | Medicaid | Individual Market | Marketplace | Other | Total |
|---|---|---|---|---|---|---|---|
| No Insurance | 19.8 (+/– 4.71) | 7.3 (+/– 3.77) | 7.5 (+/– 2.88) | 1.5 (+/– 1.05) | 3.1 (+/– 1.32) | 1.0 (+/– 0.74) | 40.2 (+/– 6.38) |
| ESI | 3.4 (+/– 1.71) | 105.9 (+/– 7.70) | 1.5 (+/– 1.25) | 1.1 (+/– 0.66) | 2.2 (+/– 1.63) | 1.2 (+/– 0.97) | 115.3 (+/– 7.70) |
| Medicaid | 0.5 (+/– 0.36) | 0.8 (+/– 0.70) | 7.7 (+/– 2.18) | — | 0.5 (+/– 0.66) | 1.0 (+/– 0.70) | 10.4 (+/– 2.53) |
| Individual Market | 0.7 (+/– 0.70) | 2.3 (+/– 1.63) | 0.0 (+/– 0.06) | 4.5 (+/– 1.79) | 0.9 (+/– 0.70) | 0.1 (+/– 0.13) | 8.5 (+/– 2.61) |
| Other | 2.8 (+/– 2.72) | 5.6 (+/– 3.00) | 4.5 (+/– 1.83) | 0.2 (+/– 0.20) | 0.9 (+/– 0.74) | 10.1 (+/– 4.51) | 24.0 (+/– 5.99) |
| Total | 27.3 (+/– 5.53) | 121.9 (+/– 7.47) | 21.2 (+/– 4.16) | 7.2 (+/– 2.18) | 7.6 (+/– 2.41) | 13.3 (+/– 4.67) | 198.5 |

NOTES: All numbers (including margins of error) are in millions of individuals. Rows show September 2013 coverage; columns show November 2014 coverage. Diagonal cells represent individuals who experienced no transition. Figures following +/– are margins of error.
Carman KG and Eibner C, Changes in Health Insurance Enrollment Since 2013: Evidence from the RAND Health Reform Opinion Study, Santa Monica, Calif.: RAND Corporation, RR-656-RC, 2014. As of April 29, 2015:
Collins SR, Rasmussen PW, and Doty MM, “Gaining Ground: Americans' Health Insurance Coverage and Access to Care After the Affordable Care Act's First Open Enrollment Period,” The Commonwealth Fund, July 2014. As of July 24, 2014:
Cordova A, Girosi F, Nowak S, Eibner C, and Finegold K, “The COMPARE Microsimulation Model and the U.S. Affordable Care Act,” International Journal of Microsimulation, Vol. 6, No. 3, 2013, pp. 78–117.
Deming WE, Statistical Adjustment of Data, New York: Wiley, 1943.
Deville JC, Särndal CE, and Sautory O, “Generalized Raking Procedures in Survey Sampling,” Journal of the American Statistical Association, Vol. 88, No. 423, 1993, pp. 1013–1020.
Efron B and Tibshirani RJ, An Introduction to the Bootstrap, Boca Raton, FL: CRC Press, 1994.
Long SK, Karpman M, Shartzer A, Wissoker D, Kenney GM, Zuckerman S, Anderson N, and Hempstead K, “Taking Stock: Health Insurance Coverage Under the ACA as of September 2014,” Urban Institute, Health Reform Monitoring Survey, December 3, 2014. As of April 29, 2015:
Pascale J, Rodean J, Leeman J, Cosenza C, and Schoua-Glusberg A, “Preparing to Measure Health Coverage in Federal Surveys Post-Reform: Lessons from Massachusetts,” Inquiry, Vol. 50, No. 2, May 2013, pp. 106–123.
RAND American Life Panel, “Panel Composition,” 2014. As of April 16, 2015:
Sommers BD, Musco T, Finegold K, et al., “Health Reform and Changes in Health Insurance Coverage in 2014,” New England Journal of Medicine, Vol. 371, 2014, pp. 867–874. As of July 24, 2014:
* The ALP also has several non-probability-based convenience subsamples, including a snowball sample. Additionally, some household members of probability-based sample participants have joined the ALP. These respondents are excluded from this research.
** Other factors, such as the proportion being estimated, can also affect the margin of error.
The research described in this article was conducted by RAND Health.