"Nothing Is More Opaque Than Absolute Transparency"
The Use of Prior History to Guide Sentencing
Published in: Harvard Data Science Review, Volume 2, Number 1 (January 2020)
Rudin, Wang, and Coker (2020, henceforth RWC) present a convincing argument against black box algorithms like COMPAS that are sometimes used in the United States to help judges sentence convicted offenders in court. They point out that the lack of transparency means that defendants (and victims) cannot assess the accuracy of the score driving the decision, and researchers cannot accurately assess the fairness of any given decision rule. They then argue that these black box algorithms should be replaced with simple and transparent risk assessment algorithms that are based primarily on age and criminal history. Prior work has shown that these tools perform at least as well as the more costly proprietary tools.
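To make concrete what "simple and transparent" means here, the following is a minimal sketch of a point-based risk score in the spirit of the tools RWC describe. The inputs (age and prior convictions), cutoffs, and weights are all invented for illustration; they are not taken from any validated instrument or from RWC's article.

```python
# Hypothetical illustration only: a fully transparent, point-based risk
# score using age and criminal history. All thresholds and weights below
# are invented for illustration, not drawn from a validated instrument.

def risk_score(age: int, prior_convictions: int) -> int:
    """Return an integer risk score; higher means higher predicted risk."""
    score = 0
    # Younger defendants receive more points (illustrative cutoffs).
    if age < 23:
        score += 2
    elif age < 30:
        score += 1
    # More prior convictions receive more points (illustrative cutoffs).
    if prior_convictions >= 3:
        score += 2
    elif prior_convictions >= 1:
        score += 1
    return score

print(risk_score(age=21, prior_convictions=4))  # 4
print(risk_score(age=45, prior_convictions=0))  # 0
```

The point of such a rule is that every contribution to the score is auditable: a defendant can check each input and each threshold, which is exactly what a proprietary black box forecloses.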
Although the article focuses narrowly on proprietary risk tools like COMPAS, RWC's argument potentially has a much broader scope. As they discuss in their final section, their argument that black box algorithms are not fair applies not only to COMPAS, but also to all discretionary sentencing done by judges. RWC state:
Interestingly, a system that relies only on judges—and does not use machine learning at all—has similar disadvantages to COMPAS; the thought processes of judges is (like COMPAS) a black box that provides inconsistent error-prone decisions. Removing COMPAS from the criminal justice system, without a transparent alternative, would still leave us with a black box.
Prior work has shown not only that judges are 'black boxes,' but also that they are not very good at identifying high-risk offenders (Gottfredson, 1999). An extension of the RWC argument, then, if I might be allowed to take the argument to an extreme that RWC did not advocate, would replace all black box algorithms, including judges, with a simple risk tool that uses age and criminal history to assign sentences.