The Right to a Glass Box: Rethinking the Use of Artificial Intelligence in Criminal Justice

As governments and corporations use AI more pervasively, one of the most troubling trends is that developers so often design it to be a “black box”: they create AI models too complex for people to understand, or they conceal how the AI functions. Both champions and critics of AI, however, mistakenly assume that we inevitably face a trade-off: black box AI may be incomprehensible, but it performs more accurately. That is not so. In this article, published in the Cornell Law Review, authors Brandon Garrett and Cynthia Rudin question the basis for this assumption, which has so powerfully influenced judges, policymakers, and academics. They describe a mature body of computer science research showing how “glass box” AI, designed to be fully interpretable by people, can be more accurate than the black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. Garrett and Rudin therefore argue for national and local regulation to safeguard, in all criminal cases, the right to glass box AI.

By Brandon Garrett and Cynthia Rudin.