
The lawyer had a gleam in his eye. He had backed me into a corner—or so he thought.

“So, doctor, you just said that the article you referenced indicates that only 20 percent of patients with this disease die a sudden cardiac death, correct?”

“Yes.”

“And your testimony is given under the standard of more likely than not, right?”

“That is correct.”
 


“That’s the same thing as a balance of the probabilities—more than 50 percent likelihood, correct?”

“Yes.”

“Yet less than half of patients with this disease die of the cause you are advocating. How can you possibly testify that it is more likely than not?”

I have faced versions of this question over the years from countless attorneys, and I can't always tell whether they are truly confused by statistics and probability or whether they think I am. They are employing an unsound argument called the ecological fallacy: applying a statistical finding generated by a large population to an individual case. For people who have no grounding in statistics, it "makes sense." For an expert, these types of questions are an opportunity to educate the judge and jury—and, perhaps, even the questioning attorney—about the science of statistical power and how probability really works.
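The flaw in the attorney's argument can be made concrete with Bayes' rule: a population base rate is only a starting point, and case-specific evidence can push the probability in an individual case well past 50 percent. The sketch below uses wholly invented numbers (the 90 and 10 percent likelihoods are hypothetical, not from any study) purely to illustrate the arithmetic.

```python
# Illustration of why a 20% population base rate does not cap the
# probability in an individual case. All likelihood figures here are
# invented for the sake of the example.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: update a prior probability with case-specific evidence."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Population statistic the attorney cites: 20% of patients with the
# disease die a sudden cardiac death.
prior = 0.20

# Hypothetical case-specific findings: suppose they appear in 90% of
# sudden cardiac deaths but only 10% of deaths from other causes.
p = posterior(prior, p_evidence_given_h=0.90, p_evidence_given_not_h=0.10)

print(f"Probability for this individual case: {p:.0%}")  # prints "69%"
```

With these assumed numbers, the probability for the individual case rises to roughly 69 percent—comfortably "more likely than not"—even though only one in five patients in the population dies that way.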

Courts require experts to testify “within reasonable scientific certainty.” Sounds legit, right? It isn’t—not to scientists.

"Reasonable scientific certainty” holds no currency in the scientific community. The National Commission on Forensic Science was a federal body created in 2013 to address conflicts of professional culture between the law and the sciences. Before its charter expired in April 2017, the NCFS released a document that called for the cessation of the use of the term “reasonable scientific certainty” in forensic expert testimony. There is no agreement among experts of the actual meaning of the phrase. For one thing, nothing in science is based on reason alone; it’s based on evidence and testing. 

Science is rarely certain—it relies on statistical probability, and acknowledges that outliers can and do exist. We forensic scientists operate in a liminal space between science and law. In the past, we have had to accommodate our professional rhetoric to the demands of attorneys by accepting the use of this phrase when we testified. Why have we done so? Because the cost of rejecting this accommodation was to open ourselves to attack. An opposing lawyer would declare that we have no credibility as an expert, and the judge might dismiss our testimony as not adhering to evidentiary standards. If you can’t testify to something with “reasonable scientific certainty” because the evidence was insufficient, the court might throw out all your testimony rather than allowing you to testify to the uncertainty of the science.

Okay, I hear you: this is absurd! Uncertainty is not a sign of poor scientific testimony—it’s the hallmark of the honest scientist! Our paradox arises from the United States Supreme Court’s Daubert decision, which limited the bounds of scientific opinion testimony by expert witnesses. Daubert v. Merrell Dow Pharmaceuticals (1993) set the federal standard for admissibility of evidence. It never required “reasonable certainty,” but it did set guidelines for testimony to be admissible if the science is reliable. What did the Supreme Court say is reliable? Reliable expert testimony in science and technical fields must be tested and subject to peer review, must have a known error rate, must be maintained by standards, and must be accepted by the scientific community.

As a consequence of Daubert, forensic science disciplines that rely on inference and experience (such as forensic pathology) have been subjected to accusations of unreliability because practitioners have no published error rate and they incorporate ancillary evidence—such as information from witnesses or police—which can be inaccurately perceived as generators of cognitive bias. While testimony about the likelihood of a particular event being within a 95% confidence interval may be appropriate when describing epidemiological research or while performing lab studies on fruit flies, these types of statistics have no bearing on the day-to-day work of forensic disciplines such as pathology and criminalistics. Our branches of science rely on training, experience, observation and scientific inference. 

There is ample inferential literature to support our observations, but not enough statistics to allow us to report on our own error rate. We don’t operate in the zone of experimental science. We can’t run double-blind tests on murdered human beings. We can’t generate fatal industrial accidents to study the mechanical dynamics at play. And so, thanks to the Daubert ruling and the persistent repetition by lawyers of the magical phrase “reasonable scientific certainty,” we have to sit up there on the stand and teach juries about statistics and probability, about inductivism and the scientific method. 
Humility is baked into scientific semantics. There are things that are not knowable based on the current state of your field of specialty. But if you are asked to answer a question on the stand and the answer is just “I don’t know,” then some lawyer will find an “expert” with no credentials or integrity to follow you and declare “I know!” In my experience, juries and lawyers will prefer the expert who is willing to speak with confidence and certainty.

They will defer to the voice who “knows” even if it is not based in good science. It’s not good enough to say “I don’t know.” If you care about justice prevailing you have to also take the time to explain why you don’t know what you don’t know. You have to explain the limits of your science.
The stakes are high. In many states, once a defendant is convicted it is not possible to appeal based on scientific advancement or factual innocence, but only on procedural grounds: that the original trial judge or attorneys erred in some way. Expert testimony that had been deemed reliable by the courts in the past is now being questioned by scientists because the science has advanced, but the new data can’t be used to free those who were incarcerated based on expert witness testimony that is now obsolete or was overstated.

The future of forensic science in the United States is in flux, and scientific literacy is becoming harder to come by in the courtroom and outside it. Nowadays one of the most daunting challenges an expert has to face is to convey uncertainty without appearing unqualified. It is an unfortunate trait of our nature that we human beings gravitate toward the person who can exude conviction with charisma, no matter his or her actual base of experience. Couple that with the testimonial result of the Dunning-Kruger effect—that the expert with the least experience and qualifications is more likely to testify with absolute certainty—and it becomes even more critical that we forensic professionals train ourselves to express clearly the limits of scientific testimony when the evidence in a case just isn’t there.

As individuals with integrity we have to apply rigorous scientific principles in our reports and our testimony, and we have to acknowledge that uncertainty exists. Your training and experience are reliable. Your professional opinion (even when you are uncertain) is reasonable. Be certain that attorneys, judges and juries get that.

Judy Melinek, M.D., is a forensic pathologist who performs autopsies for the Alameda County Sheriff Coroner’s office in California. Her New York Times Bestselling memoir “Working Stiff: Two Years, 262 Bodies, and the Making of a Medical Examiner,” co-authored with her husband, writer T.J. Mitchell, is now out in paperback. She is also the CEO of PathologyExpert Inc.
