How and why artificial intelligence is being used in the criminal justice system is coming under increasing scrutiny. Research has shown how applications intended to remove human bias are instead exacerbating existing inequalities within the system. One driver of this problem is the gap between the decision-makers who are trying to use AI, algorithms, and machine learning technologies in their processes and the product teams who develop the applications.
Harvard Law School’s Chris Bavitz shares insights from his work as part of the Ethics and Governance of Artificial Intelligence Initiative, which is looking at how we can help bridge this gap through the judicious use of new regulatory and governance models.