Summarizes, for potential users of survey data, methods and findings from a full study of survey response errors on sensitive topics. An analysis of published measurement studies suggests that average measurement bias centers on zero for sensitive-topic surveys but that responses are very unreliable, or noisy. The biasing effects of these response errors on analysis (e.g., on estimates of correlations, regressions, transition probabilities, and means) are examined both theoretically and by computer simulation. Simple and complex statistics that describe relationships or change are biased by unreliability, as are the statistical inferences drawn from them. Several strategies for neutralizing the effects of response errors on statistics and inferences are examined. Record checks and reinterviews yielded satisfactory corrections in many situations, while internal-consistency and instrumental-variable approaches were less robust. Randomized response and multiplicity techniques were not effective because they do not address response unreliability.
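The attenuating effect of unreliability on correlations can be illustrated with a minimal simulation sketch; this is not the study's own simulation, and the reliability value and variable names are illustrative. Under classical measurement-error assumptions, an observed correlation equals the true correlation scaled by the measurement reliability.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
rho = 0.6          # assumed true correlation between the two traits
reliability = 0.5  # share of observed variance that is true-score variance

# True scores: bivariate normal with correlation rho
cov = [[1.0, rho], [rho, 1.0]]
x_true, y_true = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Observed responses: true score plus independent measurement noise.
# With unit true-score variance, noise variance (1 - r)/r yields reliability r.
noise_sd = np.sqrt((1 - reliability) / reliability)
x_obs = x_true + rng.normal(0.0, noise_sd, n)
y_obs = y_true + rng.normal(0.0, noise_sd, n)

r_obs = np.corrcoef(x_obs, y_obs)[0, 1]
print(f"true correlation:     {rho:.2f}")
print(f"observed correlation: {r_obs:.2f}")   # attenuated toward zero
print(f"disattenuated:        {r_obs / reliability:.2f}")
```

With equal reliabilities on both variables, the observed correlation shrinks to roughly rho times the reliability (here about 0.30), and dividing by the reliability recovers the true value, which is the logic behind reliability-based corrections such as those obtainable from reinterview data.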