By Brandon Garrett and Cynthia Rudin
Today, as data-driven technologies are implemented across a wide range of human activities, warnings have been issued from academic, public policy, and government sources alike regarding the dangers that artificial intelligence poses to society, democracy, and individual rights.
The Federal Trade Commission (FTC) has set out detailed views on unfair and deceptive practices that rely on AI and affect consumers, and it has taken action against a series of corporations over different types of algorithms. Several bills that would regulate algorithms have been introduced in Congress, though none has been enacted; meanwhile, states have been active in considering, and in some cases adopting, legislation on uses of AI. The White House Office of Science and Technology Policy (OSTP) has called for an “AI Bill of Rights.”
Our statement responds to the OSTP call for submissions on that topic, and we focus specifically on uses of AI in the criminal system. We write to reflect our own views as researchers: one of us in law, scientific evidence, and constitutional law more broadly; the other in artificial intelligence, machine learning, and computer science more broadly.
We write to emphasize two basic points: (1) artificial intelligence (AI) need not be black box and non-transparent in the ways in which it affects criminal procedure rights, and in fact, nothing would be lost by requiring such transparency through regulation; and (2) while additional rights protections and regulations should be considered, far more can and should be done to apply and robustly protect the existing Bill of Rights in the U.S. Constitution as it applies to government uses of AI in the criminal system, particularly when AI is used to provide evidence regarding criminal defendants.
Read Brandon Garrett and Cynthia Rudin's full statement to the White House below.
Brandon Garrett is the Director of the Wilson Center and the L. Neil Williams Professor of Law at Duke University School of Law. Cynthia Rudin is a professor of computer science, electrical and computer engineering, statistical science, and biostatistics & bioinformatics at Duke University, and directs the Interpretable Machine Learning Lab.