Watchdog Report Says Algorithms Strengthen Forensic Analysis


A new report from the Government Accountability Office (GAO), the audit institution of the U.S. federal government, has found that, despite internal and external challenges, algorithms strengthen forensic analysis overall. Still, the government watchdog prepared three policy options to directly address the challenges related to law enforcement’s use of forensic algorithms.

The report breaks algorithms down into three kinds: latent print, facial recognition, and probabilistic genotyping.

Latent print algorithms, which compare latent prints from crime scenes against large databases of known prints, are the most mature of the three technologies and probably the least controversial. Accuracy is consistent across implementations, with the exception of poor-quality prints (both latent and known). While GAO did point that out as a limitation, it is a well-known and not all that unusual challenge that forensic analysts have learned to work around.
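To make the comparison step concrete, here is a minimal sketch using hypothetical data and a toy scoring function; real AFIS-style systems use rich minutiae features and geometric alignment, none of which appears in the GAO report:

```python
# Toy illustration of latent print candidate ranking; not a real matcher.

def similarity(latent: set, known: set) -> float:
    """Toy match score: Jaccard overlap between two minutiae feature sets."""
    if not latent or not known:
        return 0.0  # a poor-quality print yields few features and a near-zero score
    return len(latent & known) / len(latent | known)

def top_candidates(latent: set, database: dict, k: int = 3):
    """Rank known prints by score; a human examiner reviews the top hits."""
    scored = [(name, similarity(latent, feats)) for name, feats in database.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

# Hypothetical database of known prints, each reduced to labeled minutiae.
db = {"record_17": {"m1", "m2", "m3", "m7"}, "record_42": {"m2", "m9"}}
print(top_candidates({"m1", "m2", "m3"}, db))  # [('record_17', 0.75), ('record_42', 0.25)]
```

The sketch also hints at the limitation GAO flags: when a latent or known print is poor quality, few usable features survive, and every match score collapses toward zero.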

Facial recognition technology and probabilistic genotyping algorithms are controversial in different ways, but ultimately come down to the same element—trust.

Facial recognition technology is in the public eye far more often than the other two algorithms. Opponents have continually pointed to it as an infringement of personal rights, and published studies suggest the technology exhibits racial bias, showing higher false positive rates for Asian and Black faces. In June 2020, IBM, Microsoft and Amazon all announced they would not sell their facial recognition software to police departments.

“We’ve decided we will not sell facial recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology,” Microsoft president and chief legal officer Brad Smith said at the time.
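The bias findings cited above come down to a measurable statistic: the false positive rate, computed separately for each demographic group. Here is a minimal sketch of that per-group computation, using hypothetical trial data rather than any published study’s results:

```python
from collections import defaultdict

# Each trial: (demographic group, truly the same person?, algorithm declared a match?)
# Hypothetical data for illustration only.
trials = [
    ("group_A", False, True), ("group_A", False, False), ("group_A", True, True),
    ("group_B", False, False), ("group_B", False, False), ("group_B", True, True),
]

def false_positive_rates(trials):
    """Per-group FPR = wrongly declared matches / all different-person comparisons."""
    false_pos = defaultdict(int)  # impostor pairs the algorithm called a match
    impostors = defaultdict(int)  # all different-person comparisons seen
    for group, same_person, said_match in trials:
        if not same_person:
            impostors[group] += 1
            if said_match:
                false_pos[group] += 1
    return {g: false_pos[g] / n for g, n in impostors.items()}

print(false_positive_rates(trials))  # {'group_A': 0.5, 'group_B': 0.0}
```

A persistent gap between groups in this statistic is exactly the demographic differential the studies report.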

Probabilistic genotyping is not as well known outside the forensic, law enforcement and criminal justice communities, but policymakers have been taking aim at it for years. Most recently, a bill from Rep. Mark Takano, D-Calif., raises two questions: 1) Is the source code (not the algorithm) that powers proprietary probabilistic genotyping software a trade secret? and 2) Does the public have a right to that source code to ensure equal and fair treatment? Thus far, courts have largely answered yes to the first and no to the second. Regardless, the issue is still making its way through the court system as the two leading platforms, TrueAllele and STRmix, continue to assist in cases throughout the country.
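For context on what these platforms compute: probabilistic genotyping software typically reports a likelihood ratio weighing the DNA evidence under two competing hypotheses. In the standard general form (not any vendor’s specific model):

```latex
\mathrm{LR} = \frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)}
```

Here E is the observed DNA profile (often a mixture), H_p is the hypothesis that the suspect contributed to it, and H_d is the hypothesis that an unrelated person did; values far above 1 favor inclusion. The court fights are not over this published statistic but over the proprietary source code that evaluates it.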

Three policy options

Given that background, it’s unsurprising that all three of GAO’s policy options aim at reducing improper use and increasing public trust.

1. Increased training

The GAO advises implementing a training program or certification for analysts to help “increase consistency and reduce risk of improper use across various federal and non-federal labs and law enforcement agencies.” For latent print and facial recognition algorithms specifically, training on cognitive biases could raise awareness and improve objectivity.

2. Standards and policies on appropriate use

The creation of standards and policies for forensic algorithms could help not only reduce improper use, but also increase consistency and public confidence.

“Standards for testing and performance of facial recognition algorithms, for example, could help to reassure the public and other stakeholders that algorithms are providing reliable results,” the report reads.

GAO acknowledges that standards creation can be resource-intensive and will require input and participation from many groups across both the public and private sectors.

3. Increased transparency

In an effort to improve public trust, GAO suggests “officials provide access to the results of testing, and to information about data sources, how algorithms are used, and for what types of investigations.” Interestingly, this is already partly the case for probabilistic genotyping software: the algorithms behind both TrueAllele and STRmix have been published in peer-reviewed literature. However, TrueAllele’s developer has refused to release its source code, filing (and winning) trade secrets claims.

“An algorithm describes a procedure. A programmer writes in a computer language, translating the algorithm into source code text. A compiler turns the text into executable software that runs as a smartphone, laptop or other computer app. Algorithms are shared, software is tested. Since software pirates can easily copy text files, trade secret law protects source code confidentiality,” says Mark Perlin, founder and CEO of Cybergenetics, the company behind TrueAllele. “You don’t learn how a car works by reading its blueprints; you take it for a test run.”

For facial recognition algorithms specifically, GAO offers two opportunities: identifying which software versions are used in testing, which could improve public confidence and help agencies choose algorithms; and making more data sets publicly available for training and testing, which could minimize demographic effects.

Photo credit: GAO

 
