Statistical models offer a way to use less-than-certain print evidence in court.

Some potentially valuable fingerprint evidence found at crime scenes is currently not being introduced in court because of the way print evidence must be reported.

Fingerprint examiners have historically been required to claim absolute certainty that a specific print belongs to a specific suspect. Less-than-certain fingerprint evidence is, therefore, not reported at all, no matter its potential importance to the case.

Now, new ways of forensically considering less-than-certain evidence are emerging. Researchers are developing statistical models that quantify fingerprint evidence, opening the way for less-than-certain evidence to be included in the identification process.

Making the Grade
Cedric Neumann, a professor of statistics at South Dakota State University, is one of those researchers. He believes there is no scientific justification for the numerical point standard in popular use. Instead, he is trying to bring the benefits of probability to forensic science.

For a decade Neumann has been developing a fingerprint analysis technique designed to help forensic experts grade the quality of crime scene fingerprints and calculate the probability that a print found at a crime scene was left by a particular person. The model was published recently by Neumann and coauthors Ian Evett and James Skerrett in the Journal of the Royal Statistical Society: Series A (Statistics in Society) (Vol. 175, Issue 2, April 2012).

“By developing the statistical model and testing it on hundreds of real casework comparisons, we have generated data that support the claims made by latent print examiners over the past 100 years,” Neumann said.

More importantly, Neumann has shown that latent/control print comparisons with a certain set of features in agreement may carry stronger evidential value than other comparisons, even when those other comparisons have more features in agreement.

“This indicates that the evidential value of every comparison needs to be assessed based on its own merits, which supports a holistic approach,” Neumann said.

Data generated by the Neumann model has already been used to support the admissibility of fingerprint evidence in court during several Daubert-Frye hearings.

“The data was well received by courts, demonstrating that some criticisms of the [fingerprint] community, which may have been justified in the past, were no longer current,” Neumann said. (See Minnesota v. Terrell Dixon, File No. 27-CR-10-3378; and People of the State of Illinois v. Robert Morris, No. 11 CR 12889-01, Cook County Circuit Court, Illinois.)

David Kaye, a law professor at the Penn State Dickinson School of Law, said that it seemed only a matter of time before the Neumann model or a similar system becomes a routine supplement to, if not a replacement for, the current regime of subjective thresholds.

"Neumann, Evett, and Skerrett have devised an impressive automated procedure for taking minutiae as discerned by fingerprint examiners on crime scene marks, deriving a score based on relative positions and other data, and generating a likelihood ratio for the hypothesis of a common versus a disparate source," Kaye said. "They suggest that the statistical procedure ultimately can overturn the dominant paradigm of categorical testimony of absolute certainty."

Such a shift away from categorical certainty should be welcome in Anglo-American legal systems. The legal apparatus has been pressing for more qualified conclusions than the usual assertion that a print mark could not possibly have come from anyone else on Earth.

Indeed, in 2009 the National Academy of Sciences conducted a study of forensic science as practiced in the U.S., under authority of the Science, State, Justice, Commerce, and Related Agencies Appropriations Act of 2006. The resulting report (Strengthening Forensic Science in the United States: A Path Forward. Washington, DC: The National Academies Press, 2009) examined, among other things, ACE-V (Analysis, Comparison, Evaluation, and Verification), the protocol used to examine prints made by friction ridge skin, and concluded: "We have reviewed available scientific evidence of the validity of the ACE-V method and found none."

Likelihood Ratios
There are, however, issues in ascertaining and communicating the uncertainty in likelihood ratios derived from probability models such as Neumann’s.

"It takes discipline to present likelihoods and likelihood ratios so that they will not be misconstrued as posterior probabilities or odds," Kaye said.

Furthermore, Kaye said, British courts have expressed misgivings about likelihood ratios, especially when based on subjective probabilities (see R v. T [2010] EWCA Crim 2439).

Nevertheless, there is ample precedent for the use of likelihood ratios, primarily from DNA cases with mixed samples and from child support, immigration, and criminal cases that rely on kinship testing, Kaye said.

"Describing a suitably computed likelihood ratio is a reasonable way to assist legal fact-finders to draw their own inferences," Kaye said.

However, before numerical likelihood ratios enter the courtroom, the scientific validity of the particular procedure for computing them must be established by showing sufficient testing and published validation research or by showing general acceptance in the scientific community.

"There are no hard-and-fast rules about how much research is enough," Kaye said. "The very publication of Neumann’s work in this journal—without ensuing controversy in the statistical community—will go a long way toward meeting these standards.”

Driving Force
The driving force behind probability modeling is the desire of each developer to measure the weight behind conclusions. The researchers are conducting this work on their own; there is no federal funding at this time.

"The desire to quantify the uniqueness of fingerprints is not a new endeavor," said Michele Triplett, chair of the Probability Modeling & Training Subcommittee at the International Association for Identification, and forensic operations manager for the King County Regional Automated Fingerprint Identification System in Seattle.

Individuals have been trying to represent fingerprint information numerically as far back as Sir Francis Galton in 1892. Initial models were rough estimates at best, and dozens of improved models have been proposed over the last 100 years.

"Each new statistical model proposed has been an improvement but has still failed to accurately represent real situations when dealing with small amounts of information," Triplett said.

This has not necessarily been a waste of time.

"This research has been very useful when dealing with large amounts of level two detail and has probably lead to the current technology utilized by AFIS systems," Triplett said.

Triplett said an additional item to consider is that different probability models may have different goals.

"Models may be looking at the probability of a similar configuration of detail, the probability that a latent print was made by a specific individual, the probability that the conclusion is correct, or may be giving a likelihood ratio that compares two different probabilities," she said. Researchers can develop statistical models to represent a variety of events and outcomes and it’s possible that each model is valuable for the situation it is designed to assess, Triplett said.

"When assessing smaller amounts of information, such as latent print impressions, no statistical model that I'm aware of has been able to incorporate all the information considered by a human practitioner, including clarity, level 1, level 2, level 3 detail, the rarity of each feature, the quantity of each type of detail, the same number of intervening ridges between features, the spatial relationship between features, and any dissimilarities," she said.

Still, Triplett said, it's possible that a statistical model can give accurate results without considering all the elements a human might consider.

"With the technology and large databases in place today, it seems likely the ability to measure the weight of results is very close at hand," she said.

Douglas Page writes about forensic science and medicine from Pine Mountain, California. douglaspage@earthlink.net
