Not Bored Yet
Revisiting Respondent Fatigue in Stated Choice Experiments
Published in: Transportation Research Part A: Policy and Practice, Vol. 46, No. 3, March 2012, pp. 626-644
Posted on RAND.org on March 01, 2012
This article was published outside of RAND; the full text is available from Transportation Research Part A: Policy and Practice.
Stated choice surveys are used extensively in the study of choice behaviour across many areas of research, notably in transport. One of their main characteristics, in comparison with most types of revealed preference (RP) surveys, is the ability to capture behaviour by the same respondent under varying choice scenarios. While this ability to capture multiple choices is generally seen as an advantage, there is a certain amount of unease about survey length. The precise definition of what constitutes a large number of choice tasks varies across disciplines, however, and it is not uncommon to see surveys with up to twenty tasks per respondent in some areas. The argument against this practice has always been one of reduced respondent engagement, interpreted as a result of fatigue or boredom, with frequent reference to the findings of Bradley and Daly (1994), who showed a significant drop in utility scale, i.e. an increase in error, as a respondent moved from one choice experiment to the next, an effect they related to respondent fatigue. While the work by Bradley and Daly has become a standard reference in this context, it should be recognised not only that the fatigue component of their work was based on a single dataset, but also that the state of the art and the state of practice in stated choice survey design and implementation have moved on significantly since their study. In this paper, we review the wider literature and present a more comprehensive study investigating evidence of respondent fatigue across a larger number of surveys. Using a comprehensive testing framework employing both Logit and mixed Logit structures, we provide strong evidence that the concerns about fatigue in the literature are possibly overstated: we find no clear decreasing trend in scale across choice tasks in any of our studies.
For the data sets tested, we find that accommodating any scale heterogeneity has little or no impact on substantive model results, that the role of constants generally decreases as the survey progresses, and that there is evidence of significant attribute-level (as opposed to scale) heterogeneity across choice tasks.
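The core diagnostic described above, testing whether utility scale declines across choice tasks, can be illustrated with a logit model in which each task carries its own scale multiplier, normalised to one for the first task. The sketch below is a minimal, hypothetical illustration on simulated binary-choice data (not the paper's datasets or estimation code): choices are generated with constant scale, so the recovered per-task scale estimates should show no systematic downward trend.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical illustration of the fatigue test: a binary logit in which each
# choice task t has its own scale multiplier mu_t (mu_1 normalised to 1).
# A systematic decline in mu_t across tasks would indicate rising error,
# i.e. the fatigue effect discussed in the abstract.
rng = np.random.default_rng(0)
N, T = 500, 10                        # respondents, choice tasks each
beta_true = 1.0                       # taste coefficient (assumed, for simulation)
x = rng.normal(size=(N, T))           # attribute difference between two alternatives

# Simulate choices with CONSTANT scale, i.e. no fatigue built in.
p = 1.0 / (1.0 + np.exp(-beta_true * x))
y = (rng.random((N, T)) < p).astype(float)

def neg_ll(params):
    """Negative log-likelihood with task-specific scale, mu_1 fixed at 1."""
    beta = params[0]
    mu = np.concatenate(([1.0], np.exp(params[1:])))   # keep scales positive
    v = mu * beta * x                                  # scaled utility difference
    # Binary logit log-likelihood: y*v - log(1 + exp(v)), computed stably.
    return -(y * v - np.logaddexp(0.0, v)).sum()

res = minimize(neg_ll, np.zeros(T), method="BFGS")
mu_hat = np.concatenate(([1.0], np.exp(res.x[1:])))
print("estimated scale by task:", np.round(mu_hat, 2))
# With no fatigue simulated, the estimates should hover around 1 with no trend.
```

In a real application the same idea extends to multinomial and mixed Logit models, with a likelihood-ratio test comparing the task-specific-scale model against the constant-scale restriction.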