Algorithms were supposed to remake the American justice system, but data can discriminate, says Ngozi Okidegbe, an expert on data and the law.

Championed as dispassionate, computer-driven calculations about risk, crime, and recidivism, these algorithms were deployed in everything from policing to bail, sentencing, and parole to smooth out the often unequal decisions made by fallible, biased humans.

But, so far, this hasn’t been the case.

“In theory, if the predictive algorithm is less biased than the decision-maker, that should lead to less incarceration of Black and Indigenous and other politically marginalized people. But algorithms can discriminate,” says Okidegbe, associate professor of law and an assistant professor of computing and data sciences at Boston University. Her scholarship examines how the use of predictive technologies in the criminal justice system affects racially marginalized communities.

As it is, these groups are incarcerated at more than four times the rate of their white peers. According to the Bureau of Justice Statistics, an arm of the US Department of Justice, there were 1,186 Black adults incarcerated in state or federal facilities for every 100,000 adults in 2021 (the most recent year for which data are available), and 1,004 American Indians and Alaska Natives incarcerated for every 100,000 adults. Compare these to the rate at which white people were incarcerated in the same year: 222 per 100,000.

In recent papers, Okidegbe has studied the role of algorithms in these inequities and the interwoven consequences of technology and the law, including researching the data behind bail decisions.

Read the full article about criminal justice algorithms by Molly Callahan at Futurity.