Banishing backlogs and budget cuts with “Foresight”
Backlogs. Budget cuts. Legislative pressure to perform. The world of forensic science is fraught with challenges that extend beyond mere science. Increasingly, forensic labs are being asked to do more with less, a frustrating situation for professionals tasked with providing sensitive data that relies on quality testing. How can forensic professionals harness the performance data they need to both improve their own work and communicate their needs to a larger audience? Enter Foresight.
In 2003, the Quadrupol¹ study examined the practices of four European forensic labs in order to uncover inefficiencies and develop best practices. Building on that work, in 2006 West Virginia University began the Foresight project: a multi-year evaluation of budgetary and performance data from 15 forensic labs. Funded by a grant from the National Institute of Justice (NIJ), the intensive study required participating labs to provide extensive data on every budget item associated with their lab and to share all casework. The goal was to evaluate a forensic lab much as we would any other economically driven business, but with one important difference: as public service entities, forensic labs are motivated by different incentives than a traditional market-based business. These labs were chasing performance, not profit, while being asked to do more and more work with far fewer resources. The question became one of economics and efficiency: How could the labs maximize output while minimizing staffing needs, test duplication, and other cost centers?
Today, with 64 ISO 17025:2005 and/or ASCLD/Lab certified labs participating, the Foresight project can provide fairly specific answers to that question. These laboratories represent a broad spectrum that includes national, statewide, regional, and metropolitan jurisdictions; single-laboratory and multi-facility systems; public and private labs; and participation from five continents. Based on caseload and financial information, the project offers insight into how many personnel are required for a “right-sized” organization. Study participants have used the Foresight performance tools to power through their backlogs in record time, and some have leveraged the data collected to fend off budget cuts and even increase funding. Participants have also learned best practices from peers who were “best in class.” Knowledge, in this case, truly is power: power to serve more with less, and power to communicate this enhanced performance to those responsible for approving ongoing, much-needed resources.
As part of the Foresight research team, we owe quite a bit to our first group of participants. While today we use a powerful-but-simplified two-page data collection instrument we affectionately call “LabRAT” (Laboratory Reporting and Analysis Tool), in the beginning we asked our participating labs to provide pages upon pages of data for our initial evaluation. It was an onerous task, and we are forever grateful they were up to it. It helped us refine what we needed to collect, focusing on key metrics rather than the entire universe of information. It also exposed some unexpected areas for improvement, most notably in the area of language.
The Language Barrier
When we first began collecting data for Foresight, we ran into a few unexpected problems. When we talked about the most basic elements, such as a “case” or “test,” we quickly realized that everyone in the room had a slightly different definition. As you can imagine, this posed some problems for accurate data collection. It was enlightening to learn that standard definitions didn’t exist. We spent more than a year and a half working out exactly what every term meant to be sure we were all speaking the same language. Our group of 15 or so scientists and lab directors from across the United States and Canada was infinitely patient and willing. Sitting together every few months to discuss our goals, we were able to create succinct and agreed-upon definitions that formed the basis of our research data collection.
Developing Best Practices
Once we had definitions in place, we dove into the issues at hand. These issues were no different than the challenges that persist today: reducing backlogs, making sure each lab group had enough, but not too many, professionals to do the job at hand, ensuring the best mix of scientists, analysts, and support staff, and dealing with generational and other interpersonal issues.
As a group we devised our first test instrument, incorporating budgetary measures by area of investigation, personnel statistics, training and tenure data, and casework data that include information on case type, number of test samples, and more. Our goal was to evaluate what constitutes best practices. This exhaustive process helped illuminate the key areas that affect performance and can be efficiently adjusted to provide an almost immediate, positive impact. This knowledge helped us create the much more concise collection tool we use to this day, LabRAT.
As today’s participants enter data into the LabRAT spreadsheet, it calculates pertinent metrics the labs can use and track on a regular basis. This in itself is illuminating for some labs that may have access to the data through their LIMS but have rarely had the time to collect the information in such a coordinated and interlocking way. But what we find most interesting—and what draws many labs to the project—is the way our participating labs are using the data and how they are changing practices.
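To give a flavor of what “interlocking” metrics look like, here is a minimal sketch in Python. The field names, formulas, and figures are our own illustrations, not actual LabRAT entries or calculations:

```python
# Illustrative sketch only: the field names and formulas here are
# hypothetical, not actual LabRAT fields or calculations.

def derived_metrics(cases_received, cases_completed, fte_examiners):
    """Turn a lab's raw annual figures into comparable performance metrics."""
    return {
        "cases_per_fte": cases_completed / fte_examiners,    # productivity
        "backlog_change": cases_received - cases_completed,  # growing or shrinking?
        "clearance_rate": cases_completed / cases_received,  # share of demand met
    }

# Invented example figures for a single area of investigation
print(derived_metrics(cases_received=3_000, cases_completed=2_800, fte_examiners=14))
```

Tracking a handful of ratios like these on a regular basis is what lets a lab spot trends that raw LIMS case counts alone tend to hide.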
Backlogs are a great example. Given the data we’ve collected over a wide sample population, we can quite confidently talk about where the greatest efficiencies lie and how to achieve them in the most expeditious way. We know where labs are performing particularly well. Take DNA, for example. We found that once a lab corrects inefficiencies in workload, individuals from non-DNA areas of investigation often have additional downtime. Labs may use that downtime to train these folks to triage DNA casework, which leads to much better throughput as demand for DNA casework grows. The cost is minimal—you’re not adding any staff—and suddenly you’re working through backlogs so quickly that you have additional time to focus on other cases. Given the large group of labs we’ve worked with, we have a good idea what staffing size is optimal, and participants can share that knowledge to improve their work.
In addition to exposing areas for improvement, Foresight participating labs also enjoy the benefit of seeing where they fit within the larger group. This in and of itself can be enlightening. In some cases, a lab may find it cannot improve its efficiency beyond its current level. The data may show the lab is more costly than average in a particular area, but external factors like low crime rates (e.g., less need for a particular type of test) or higher compensation requirements due to geography have a significant impact. Uncovering this type of result is not necessarily a bad thing; however, lab personnel may want to consider outsourcing certain types of cases to adjust for this effect.
When it comes to cost, the Foresight model was designed to overlook nothing. When we talk about the cost of doing something, we look at everything from equipment, telecommunications, heating, lighting, facility rent … everything. If a participant doesn’t have access to the data, we can estimate those costs from other labs in our studies. We come up with an all-inclusive figure that tells participants what it costs to process a case. This leads to informed decisions. Take trace evidence cases, for example. You might find that processing one trace evidence case costs the same as processing two, three, or even four traditional DNA cases. While trace evidence is wonderful and powerful, if DNA alone will get you where you need to be, this cost factor will heavily affect your decision-making process. Foresight is not about cutting where it matters. It’s about using resources wisely so that labs can do more and enhance the services they provide. Once you know the key metrics, you can make informed decisions.
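The arithmetic behind that kind of comparison is straightforward. A hedged sketch of the all-inclusive costing idea, with invented budget lines and case counts (not Foresight data), might look like:

```python
# Hedged sketch of all-inclusive costing: sum every budget line for a
# section, divide by cases completed. All figures below are invented.

def fully_loaded_cost_per_case(budget_lines, cases_completed):
    """Total section cost (personnel, equipment, facilities, etc.) per case."""
    return sum(budget_lines.values()) / cases_completed

trace = {"personnel": 600_000, "equipment": 90_000,
         "consumables": 40_000, "facilities_and_utilities": 50_000}
dna   = {"personnel": 1_500_000, "equipment": 300_000,
         "consumables": 250_000, "facilities_and_utilities": 110_000}

trace_cost = fully_loaded_cost_per_case(trace, cases_completed=780)
dna_cost   = fully_loaded_cost_per_case(dna, cases_completed=7_200)

# How many DNA cases one trace evidence case "costs" in this invented example
print(round(trace_cost / dna_cost, 1))  # 3.3
```

Only when every cost is rolled in, down to heating and rent, does a ratio like this become meaningful enough to drive triage decisions.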
The results bear this out. For example, Foresight applies a testing intensity measure that serves as a proxy for quality control. Generally, applying more tests in the same type of case is highly correlated with a reduction in risk or an increase in quality control: you are trying to reduce errors. However, you can test too much. Some labs found they were using legacy protocols that required them to perform a presumptive test to determine if a sample was blood. But they quickly found that the next test performed both proved it was blood and added new information to the profile, so the presumptive test was unnecessary duplication. Testing too many times in a particular case takes away personnel, time, and funding that could be used elsewhere. Examining what you do and comparing it to what others do, essentially best practice benchmarking, can lead to some simple yet potent fixes.
Other labs have used the data collected via Foresight to justify budget increases or other improvements. In one of the more dramatic examples, a Colorado lab was able to secure an entirely new facility, and the Foresight metrics offered convincing evidence in support of its capital project. It’s hard to argue with numbers. Our labs, and the labs’ stakeholders, understand that by paying attention to business processes you can do more with what you have now, and when you request additional resources you can empirically prove it is to meet a real need and not because of any lingering inefficiency.
Having access to information is power. From backlogs to budgets, understanding and communicating your needs based on facts and backed up with figures helps forensic lab personnel achieve their goals and avoid potential pitfalls.
Interested in participating in the Foresight study?
Interested participants may e-mail Paul Speaker at email@example.com for details. Public and private labs are welcome. Participants are required to share budgetary data regarding personnel, capital, consumables, and other areas, plus submit casework data. Participant names are not shared publicly, allowing labs concerned with privacy to maintain confidentiality.
1. European Network of Forensic Science Institutes. (2003). QUADRUPOL.
Paul Speaker, Ph.D., is a faculty member of the West Virginia University Finance Department. He holds a Ph.D. and M.S. from Purdue University and a B.A. from LaSalle College. Dr. Speaker also holds the position of Chief Executive Officer of Forensic Science Management Consultants, LLC, a firm that specializes in the business of forensics using the forensics of business.
Tom Witt, Ph.D., is a faculty member of the West Virginia University Economics Department. He also serves as director of the WVU Bureau of Business and Economic Research and is an associate dean for research and outreach in the WVU College of Business and Economics. He holds a Ph.D. and M.A. from Washington University (St. Louis) and a B.A. from Oklahoma State University.
Dr. Witt presented his findings from the Foresight project at the 2011 International Symposium on Human Identification (ISHI). Dr. Speaker will host a Foresight presentation at ISHI 2012, which takes place October 15–18 at the Gaylord Opryland Resort in Nashville, Tennessee (ishinews.com).