As an applied statistician, sensitivity analyses are among the less glamorous parts of my job. A researcher asks “could you just check that…” and umpteen figures and tables later we have the (usually unsurprising) answer that no, this or that particular issue doesn't change the findings of the work. But despite the lack of glamour, sensitivity analyses are an important part of research. This blog looks at a number of things. First, it examines how sensitivity analyses are used to explore assumptions made during statistical analyses. Second, it looks in more depth at one approach that can sometimes be helpful. Finally, it provides an example of a sensitivity analysis leading to additional, and surprising, insights in the overall research.
First, Assume Nothing
A sensitivity analysis allows you to test and explore empirically some of the assumptions that underlie the findings in a piece of work. These might be assumptions about a relationship, or about data that has been excluded, or about which part of an analysis is most important. The basic idea goes like this — change the analysis approach to make sure you get the same findings even with a different method, change the data to see if you get the same findings even if some feature were different, try something else out, and make sure you aren't surprised by what you see…
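The “change the data” idea above can be sketched in a few lines. The following is a minimal, hypothetical illustration (the data and the specific check are assumptions, not taken from the commentary): fit a regression slope on a full sample, re-fit it with a suspect extreme observation excluded, and see whether the substantive finding survives.

```python
# Minimal sketch of a data-perturbation sensitivity analysis on
# simulated data. We estimate a regression slope on the full sample,
# then re-estimate after dropping one extreme observation, and compare.
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(scale=0.5, size=50)

# Add one artificial high-leverage outlier to play the role of
# "suspect" data that the sensitivity analysis will exclude.
x = np.append(x, 10.0)
y = np.append(y, -5.0)

def slope(x, y):
    """Least-squares slope of y on x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

full = slope(x, y)               # estimate on all data
trimmed = slope(x[:-1], y[:-1])  # estimate with the outlier excluded

print(f"slope with outlier:    {full:.2f}")
print(f"slope without outlier: {trimmed:.2f}")
```

If the two estimates tell the same story, the finding is robust to that data choice; if they diverge sharply, as here, the analysis has surfaced something the researcher needs to understand before trusting the headline result.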
The remainder of this commentary is available at statisticsviews.com.
Catherine Saunders is an analyst at RAND Europe.
This commentary originally appeared on Statistics Views on May 1, 2015. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.