The Forensic Sciences' Toxic Entanglement with the Myth of Objectivity


by Chaunesey Clemmons*, MA, and Allysha P. Winburn**, PhD, RPA, D-ABFA

“Scientific objectivity.” It’s a concept as old as the Enlightenment and a mainstay of mid-20th-century approaches to science, thought to be a core tenet of forensic scientific analysis and testimony. It’s also a myth—and it’s dangerous.

With roots in 17th-century scientific approaches, the theories of objectivity and subjectivity are traditionally presented as dichotomous and opposite. Objectivity holds that scientists can distance themselves from the objects of their study, collecting data on concrete phenomena without influence from their own backgrounds, experiences, or values. Subjectivity holds that scientists, themselves human, are inherently embedded in whatever cultural context shaped their theories and values, and that those theories and values necessarily infuse their scientific conclusions. In light of the 2009 National Academy of Sciences report and other calls for standardization and error quantification of the forensic sciences, it should be unsurprising that forensic scientists maintain the ideal of objectivity as a goal for their analyses. After all, forensic scientists, unlike many other scientists, testify to their results in a court of law, a context in which certainty and lack of error have traditionally been paramount.

The problem is that scientific objectivity doesn’t actually exist. And researchers in disciplines from cognitive neuroscience to cultural anthropology have known that for decades.

A commitment to the pursuit of objectivity characterized most scientific endeavors through the mid-20th century. This approach, often referred to as “positivism,” maintained that if scientists simply used an empirical approach to test hypotheses about a universal reality from which they held themselves separate, natural laws would emerge that explained the phenomena under study—be they laws of physics, chemistry, or human social interactions. Starting in the 1970s, however, philosophers of science like Roy Bhaskar and Bruno Latour began to question the idea that scientists could separate themselves and their opinions from the experiments they were conducting and the conclusions they were drawing. Science is a social process, they argued, since all human knowledge is, necessarily, constructed by humans. “Facts” and “laws” don’t simply exist to be discovered by objective scientists; rather, the scientists are themselves the agents that create their conclusions. This production of scientific knowledge is culturally embedded and context-specific. Further, the explicit and implicit values of scientists affect the conclusions they make and influence their interpretations of those conclusions. This is the idea of “theory-laden” data: that all scientific conclusions are laden with the theories, ideas, and beliefs that scientists bring to the table (or, more appropriately, to the lab bench).

Through the end of the last century and into the beginning of this one, scientists in many disciplines began to be swayed by research coming from fields like Social Studies of Science and Science and Technology Studies indicating that true objectivity is impossible to achieve and scientific data are necessarily theory laden. Anthropologists, historians, philosophers, and other social scientists modified their scientific approaches to transparently acknowledge their own subjectivities, admit the potential for explicit or implicit biases to color their analyses, and allow for the possibility of multiple valid interpretations of the same data.

Still, the forensic sciences resisted. Then came the wave of research by cognitive neuroscientists and psychologists focusing on the possibility for cognitive bias to influence not just science in general, but the forensic sciences in particular. In publication after publication, scholars like Itiel Dror showed that implicit biases (the biases we don’t even know we have) influence observations and conclusions in forensic science disciplines including anthropology, DNA, document examination, fingerprint analysis, nursing, pathology, and toxicology. Whether the object of study is a human bone, a mixed-contributor DNA sample, or a latent fingerprint, experts’ interpretations vary based on the expectations of the experts and the context of their analyses. In particular, receiving contextual information can influence a forensic scientist’s conclusions. Knowing that a suspect confessed, for example, might sway an expert to perceive a match between that suspect’s data (e.g., DNA, fingerprints) and samples recovered from a crime scene. Knowledge that a skeletonized decedent was found with clothing or other material evidence might influence an anthropologist’s estimate of biological sex. The effects of these biasing influences impact not only the conclusions (i.e., the interpretation of the data), but also what the data are (because the biases also impact sampling, testing strategies, etc.). These effects are further heightened when the quality of the evidence being examined is fragmented, degraded, or otherwise not pristine; when methodological inadequacies are present; and/or when the data are ambiguous or difficult to interpret.

What emerges from these decades of multi-disciplinary research is a clear consensus: true scientific objectivity is not possible to achieve. The dichotomy of objective vs. subjective further exacerbates the myth of pure objectivity. Some forensic domains are indeed more subjective than others, creating a continuum, but even at the far end of this continuum, pure objectivity does not exist. Forensic scientists have, in fact, developed effective quality-control tools that mitigate the effects of bias by constraining subjective influences with rigorous analytical protocols. These tools, however, remain underused. Meanwhile, the myth (and the unattainable goal) of objectivity persists in many forensic science disciplines. This is a problem for several reasons.

Portraying forensic science analyses as completely objective can be seen as optimistically inaccurate and naïve at best, and dangerously misleading at worst. If we imply that our results are facts rather than interpretations, we contribute to the misconception among jurors and other members of the public that forensic science findings are infallible. This runs counter to post-Daubert, post-NAS calls for error and uncertainty analyses and espouses an outdated, positivist idea of “Science” (with a capital S) as monolithic and scientists as omniscient. The rhetoric of objectivity can lead to miscarriages of justice if an expert’s opinion is viewed as a scientific certainty worthy of more credence than other forms of evidence.

The pervasive myth of objectivity poses other threats. In the current sociopolitical climate, in which issues of racism and social injustice are at the forefront of our consciousness, it is common to hear “scientific objectivity” framed as an excuse not to engage with the human side of our analyses. Due to well-documented inequities in social systems worldwide, forensic casework is disproportionately practiced on individuals from communities of color. And yet, forensic scientists frequently tell each other (and ourselves) that we cannot empathize with these communities, cannot stand in solidarity with them, cannot explicitly reject racism in our teaching, writing, and actions, because to do so is to jeopardize our “scientific objectivity”—a standard which remains both elusive and fictional.

In this context, “objectivity” reeks of privilege, and “remaining objective” veils a deeper goal to maintain the status quo. Those who believe themselves to be “capable” of maintaining complete impartiality, neutrality, and detachment from the individual(s) and evidence before them sit cloaked in the highest room in the tallest tower of the castle of privilege. These “objective” scientists are not the ones on the receiving end of systems of oppression and injustice. Forensic scientists without the privilege to disconnect from the reality of biased and repressive systems endure unrelenting insistence that they need to remain “objective.” While there are many reasons why a forensic scientist may not hold the privilege of “objectivity,” practitioners identifying as Black, Indigenous, Hispanic and Latinx, Asian, and persons of color, in particular, are forced to reconcile their existence as humans affected by oppressive systems and an ability to remove this truth in forensic settings.

Social conditions have forced the very existence of persons of color to embody acts of advocacy and activism, because they have to fight for their lives and their opportunities every day. When forensic science colleagues, mentors, and leaders inform these practitioners that they “can’t be an advocate or an activist and remain ‘objective’” and therefore “can’t be a forensic scientist, since they are not ‘objective’,” those practitioners are inherently denied status as legitimate forensic scientists. The experiences of persons of color do not detract from their abilities to practice good science; rather, they bring nuance and a coveted grasp of intersectionality to the forefront of practice within a medico-legal context.

Finally, the very assumption that taking a stance on social issues undermines a forensic scientist’s “objectivity” with “political” advocacy should be dismissed. Asserting the reality of human equality and equity is not—or should not be—political. To wash our hands of engaging with issues of social injustice by declaring that “science should be objective,” and to describe support for historically marginalized groups as a political act, seems intentionally naïve and is tantamount to siding with systems of oppression. It also distracts from the underlying, positive goal of ensuring that implicit prejudices do not contaminate scientific interpretations. Politicization of identity is designed to maintain the power of those whose identity is deemed neutral. Forensic scientists exist in spaces armed with the potential to aid in dismantling these systems of injustice and should not be dissuaded from doing so by believing in a myth.

If we reject the myth of scientific objectivity, acknowledging both the possibility of our implicit biases and the validity of our emotions and our empathy as effective resources, where do we go from here? Fortunately, forensic scientists, like all practitioners of the scientific method, have many tools to combat problematic aspects of cognitive bias. Quality-control practices, including protocols like blind analysis, blended-blind analysis, peer review, and the linear sequential unmasking of case-relevant data, have proven effective in reducing the biasing factors to which forensic observers are exposed. Strong methods and standardized analytical protocols can reduce the impacts of biases that do get through this first line of defense.

To develop those methods, forensic scientists can consult a multiplicity of voices, including not only those within but also outside of forensic science, among stakeholders who have important perspectives and access to spaces that are denied to practitioners. Instead of pursuing the imaginary notion of pure objectivity, if we acknowledge the possibility of our (inherently human) subjectivity and make concerted efforts to constrain it, we can approximate a form of mitigated objectivity1 that is more realistic, more humanistic, and more conducive to disciplinary critique—and disciplinary improvement. Importantly, we can do so in a way that does not betray our ethical commitment to advancing social equity. Otherwise, our dispassionate pursuit of “objectivity” itself becomes a subjective position in support of the status quo; we set ourselves up for failure by pursuing idealized and unachievable goals; and we are not transparent about the true capabilities of our science.

This more realistic and productive approach to forensic science cannot happen while we continue to maintain the myth of pure scientific objectivity. It’s time to reject it once and for all so that we can move on to stronger, more defensible, and more ethical forensic science.

*Chaunesey Clemmons holds two BA degrees in Anthropology and Criminal Justice from Florida Atlantic University and an MA in Anthropology from Texas State University. She is co-founder of the Coalition for Equity in Anthropology, an independent group committed to dismantling the intersectional barriers confronted by BIPOC and underrepresented persons when navigating the four fields of anthropology. Her research interests include understanding the intersectionality of identity for biological and forensic applications. 

**Allysha P. Winburn is an assistant professor of anthropology at the University of West Florida. A biological anthropologist with forensic and bioarchaeological expertise, her research focuses on skeletal aging and age estimation, and the ritual use of human remains. Winburn received a bachelor’s degree in archaeological studies from Yale University, a master’s degree in anthropology from New York University, and a doctorate in anthropology from the University of Florida.

The perspectives expressed here are solely those of the authors, not any of the institutions with which they are affiliated. 

1This term comes from the work of philosopher of science Alison Wylie.

Photo: The eight sources of bias that may cognitively contaminate sampling, observations, testing strategies, analysis and conclusions, even by experts. They are organized in a taxonomy within three categories: starting off at the top with sources relating to the specific case and analysis (Category A), moving down to sources that relate to the specific person doing the analysis (Category B), and at the very bottom sources that relate to human nature (Category C). Credit: Taken from Dror (2020, page 8000).