Study: AI Sentencing Tools Need to Be Closely Scrutinized

In a paper published in the journal Behavioral Sciences & the Law, experts from the University of Surrey take a critical look at the growing use of algorithmic risk assessment tools, which serve as a form of expert scientific evidence in an increasing number of criminal cases.

The review argues that, because of issues such as developer bias and weak statistical evidence of the tools' predictive performance, judges should act as gatekeepers and closely scrutinise whether such tools should be used at all.

The paper outlines three steps that judges should consider:

  • Fitness: whether using the AI tool is relevant to the case at all
  • Accuracy: whether the tool can truly distinguish between reoffenders and non-reoffenders
  • Reliability: whether the tool's outcomes are trustworthy in practice. This step is not required if the judge finds the tool lacking in either of the first two.
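The sequential, short-circuiting nature of the three steps can be sketched in code. This is a purely illustrative sketch, not anything from the paper itself: the field names, the use of an AUC score as the accuracy measure, and the 0.7 threshold are all assumptions made for the example.

```python
# Hypothetical sketch of the three-step judicial scrutiny as a
# short-circuiting gate. All names and thresholds are illustrative
# assumptions, not drawn from the paper.
from dataclasses import dataclass


@dataclass
class ToolAssessment:
    fit_for_case: bool           # Step 1: is the tool relevant to this case?
    auc: float                   # Step 2: how well does it separate reoffenders
                                 #         from non-reoffenders? (assumed metric)
    reliable_in_practice: bool   # Step 3: are its outcomes trustworthy in use?


def admit_tool(a: ToolAssessment, min_auc: float = 0.7) -> bool:
    """Return True only if the tool passes all three steps in order.

    Later steps are skipped as soon as an earlier one fails, mirroring
    the paper's point that reliability need not be examined if the tool
    is already lacking in fitness or accuracy.
    """
    if not a.fit_for_case:           # Step 1: fitness
        return False
    if a.auc < min_auc:              # Step 2: accuracy
        return False
    return a.reliable_in_practice    # Step 3: reliability
```

For example, a tool that is relevant and discriminates well but has untrustworthy real-world outcomes would still be rejected at the final step.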

"These emerging AI tools have the potential to offer benefits to judges in sentencing, but close attention needs to be paid to whether they are trustworthy. If used carelessly these tools will do a disservice to the defendants on the receiving end," said Melissa Hamilton, author of the paper and professor of Law and Criminal Justice at the University of Surrey.

The paper is available open access.

Republished courtesy of University of Surrey. 
