There is little consistency in how survey response rates are calculated and reported, despite efforts by professional, trade, and academic organizations to encourage the adoption of a standard system for reporting them. In addition, investigators and survey research organizations face differing incentives to report the most favorable response rates possible. This situation makes it difficult to compare response rates across surveys, and it may lead researchers to underestimate the extent of selection bias in the survey data sets they analyze. In this paper, we provide a case study of calculating and analyzing non-response rates in a complex survey recently conducted in Los Angeles County using in-home interviews. We systematically examine rates and sources of non-response at each stage of the fieldwork, from the list of dwellings initially sent to the field through the completion of all interviews in each household. We then analyze the effects of tract, housing, and individual characteristics on non-response.
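One widely promoted standard of the kind referred to above is AAPOR's Standard Definitions, whose minimum response rate (RR1) counts only completed interviews as respondents and treats all cases of unknown eligibility as eligible. The sketch below illustrates that formula; the function name and the example case counts are hypothetical, not figures from this study.

```python
def response_rate_rr1(complete, partial, refusal, noncontact,
                      other, unknown_household, unknown_other):
    """AAPOR Response Rate 1 (the minimum response rate).

    Numerator: completed interviews only.
    Denominator: all known-eligible cases (complete, partial,
    refusal, non-contact, other) plus all unknown-eligibility
    cases (unknown if housing unit, unknown other).
    """
    eligible = (complete + partial) + (refusal + noncontact + other)
    unknown = unknown_household + unknown_other
    return complete / (eligible + unknown)

# Illustrative case counts (hypothetical):
rate = response_rate_rr1(complete=800, partial=50, refusal=100,
                         noncontact=30, other=20,
                         unknown_household=40, unknown_other=10)
print(round(rate, 3))  # 800 / 1050
```

Because RR1 keeps every unknown-eligibility case in the denominator, it is the most conservative of the AAPOR rates; more lenient variants discount unknown cases by an estimated eligibility proportion, which is one source of the inconsistency in reported rates.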