LONDON, UK - MAY 7, 2015: Rotating sign outside the headquarters of London's Metropolitan Police in Westminster. Senior officers in the force are based in this building. (Credit: BasPhoto)

American judges, juries, lawyers and forensic scientists have been wrestling with how evidence and testimony are presented in courtrooms.

Their counterparts in Britain are grappling with many of the same issues, from broad policy questions to the finest technical details.

The House of Lords’ Science and Technology Committee has an ongoing inquiry into “forensic science,” announced in July.

The latest testimony in the inquiry comes from a group of academic experts from Queen Mary University of London, the University of Edinburgh and the Alan Turing Institute. Together, they contend that "matches" and "identification" are improperly presented in court proceedings.

Those concerns are especially acute for low-template DNA samples and genetic mixtures, as well as partial forensic traces left at crime scenes (all of which echoes the arguments in America).

“The statistical aspects of forensic science are often either simply overlooked (because they are considered too difficult) or poorly presented by both lawyers and forensic scientists,” said Norman Fenton, of Queen Mary’s School of Electronic Engineering and Computer Science. “Errors can and do occur at every level of evidence evaluation: sampling, measurement, interpretation of results, and presentation of findings.”

"Matches" are no such thing, the scientists argue, especially partial DNA profiles or fingerprints, which cannot "identify" a person.

Everyone involved in the system—from judges to lawyers to juries—should have some training in statistical analysis so they can better understand probabilities and error rates, according to the experts. Law schools and forensic classes are the best place to start providing that training, they said.

The establishment of Bayesian networks would be key, said Fenton. These models chain together the error rates of individual pieces of evidence, showing how each one affects the overall probabilities in a complex prosecution.

“These tools can enable criminal investigators to explore the impacts of different assumptions,” added the computer scientist.
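The idea can be illustrated with a toy calculation. The sketch below is not Fenton's actual model; it is a minimal, hypothetical example of the Bayesian reasoning the experts describe: a prior probability that a suspect is the source of a trace is updated by each piece of evidence in turn, and the false-positive rate assumed for a "match" dramatically changes the resulting posterior.

```python
def update(prior, p_match_if_source, p_match_if_not_source):
    """One Bayesian update: probability the suspect is the source,
    given a reported match with the stated error characteristics."""
    numerator = prior * p_match_if_source
    denominator = numerator + (1 - prior) * p_match_if_not_source
    return numerator / denominator

# Hypothetical numbers: a 1-in-100 prior, a test that detects a true
# source 99% of the time, and a 1-in-1000 false-positive rate.
posterior_low_error = update(0.01, 0.99, 0.001)

# Same evidence, but assuming a 1-in-100 false-positive rate instead.
posterior_high_error = update(0.01, 0.99, 0.01)

# Evidence can be chained: the posterior from one piece becomes the
# prior for the next (assuming the pieces are independent).
combined = update(posterior_low_error, 0.99, 0.001)

print(round(posterior_low_error, 3))   # roughly 0.909
print(round(posterior_high_error, 3))  # exactly 0.5
print(round(combined, 4))
```

The point of the exercise mirrors the experts' argument: changing one assumed error rate from 0.001 to 0.01 moves the posterior from about 91 percent to 50 percent, which is why investigators exploring "the impacts of different assumptions" need explicit probabilistic models rather than bare claims of a "match."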

The first priorities for forensic research funding should be Bayesian networks, along with experimental studies of the prevalence of DNA transfer. Another acute need is interdisciplinary research across different forensic fields, the academics added.

At the same time, some of the experts caution against the uncritical use of machine learning and automated algorithms, because those too can make mistakes.

"The outcome (of automated analyses) is often treated as truth, whereas, in reality, mistakes may be common," said Primoz Skraba, of Queen Mary's School of Mathematical Sciences. "In the past, one could rely on population studies being normally distributed. However, for these new methods, the error rate is often difficult to assess."

One of the techniques particular to the U.K. that drew the ire of the critics was the use of the Metropolitan Police’s Gangs Matrix, a computer system to “predict” criminal affiliation and behavior created in 2012 at least partly in response to the London riots of the previous year. The computer models are biased, the critics said.

“Risk scores generated by police algorithms are shared with multiple agencies and this results in often stigmatic and punitive repercussions for the individual involved,” said Skraba. “The issues raised are not restricted to privacy implications and data protection compliance, but to the role and legitimacy of the criminal justice system as a whole.”

As noted, the arguments echo much of the current debate in the United States. For instance, the dispute over "identification" and "matches" based on trace evidence interpretation led the Department of Justice to roll out a new set of "uniform language" standards this year.