How to Get Comfortable with AI in Digital Forensics

by Jared Barnhart, Head of Global Engagement & Community at Cellebrite

Artificial intelligence in digital forensics is getting a lot of attention right now, and let’s be honest, much of it is negative.

The concerns are reasonable. Trust, validation, and defensibility are non-negotiable.

Yet there’s a broader reality that needs acknowledging: digital forensics has scaled faster than the resources required to support it. Labs are overwhelmed. Investigators are undertrained in digital evidence.

AI is here whether we like it or not. It’s changing the way investigators and forensics teams are being asked to work, and using it isn’t really optional.

It’s alright to feel uncomfortable with AI, but the opportunity is huge, and forensics teams will need to learn how to get comfortable with it.

Where AI can actually help

Investigative workflows have quietly shifted into a pattern of extract, decode and hand off.

At the end, when it’s time to testify, there’s often a pause. “Who can actually speak to this evidence?” That’s the problem.

AI is not a replacement for experts; it’s an enabler of scale.

Agencies typically encounter two to five devices per case, multiplying review volume, evidence correlation steps, and the risk that important information remains hidden. Add to that the multiple communication channels, data platforms, agencies, and stakeholders involved in that case. Then take it a step further with investigators handling multiple investigations at a time.

It’s no surprise that over two-thirds of investigators say the biggest challenge is the time it takes to review the digital data.

Where AI can make a big difference is allowing investigators to ask questions of the data, non-digital forensics experts to explore evidence, and teams to surface leads faster. The mountain of digital evidence in every case can seem daunting, but when AI is used in the right way, it can allow forensics teams to crack cases they were never able to before, faster than they were able to before.

Human oversight is still key

Digital forensics is a science, and humans must always be in the loop.

The stakes are too high. Investigators are dealing with cases involving CSAM, organized crime, and other high-sensitivity cases that have a huge impact on victims and their families. Decisions in these cases cannot be made definitively by AI.

Experts must validate and testify to the truth. And if AI is being used, it must be transparent and clearly auditable, showing how a decision was reached before a human verifies it.

The opportunity here isn’t just more automation. It’s amplifying curiosity. The same curiosity that solves cases in the real world should exist in the digital one.

Imagine investigators digging through data, finding key insights and then bringing those findings back to forensic experts to validate and stand behind.

AI is a sidekick to support investigators, helping to sift through vast volumes of digital evidence, for example, but given the stakes of investigations, humans are still vital to the forensic process and decision making.

That’s the future. AI-assisted discovery. Human-validated truth.

Data quality is vital

When AI is used in investigations, the data behind it is absolutely critical.

It’s the same as any other investigative method. A degraded DNA profile is unusable, and the same holds for the data that AI is trained on.

Equally, the chain of custody still applies. From evidence data acquisition to analysis to the point where the outcome is shared, there still needs to be a clear audit trail and accountability. If anything, AI raises the stakes because any errors spread at a much larger scale.
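One way to picture an auditable trail is a tamper-evident log in which each entry’s hash chains to the one before it, so any alteration after the fact is detectable. This is a minimal illustrative sketch, not any vendor’s implementation; the actors, actions, and entry fields are all hypothetical.

```python
# Hypothetical sketch of a tamper-evident audit trail: each entry's hash
# covers its content plus the previous entry's hash, so edits are detectable.
import hashlib
import json


def _entry_hash(actor, action, prev_hash):
    # Canonical JSON (sorted keys) so the hash is reproducible.
    payload = json.dumps(
        {"actor": actor, "action": action, "prev": prev_hash}, sort_keys=True
    )
    return hashlib.sha256(payload.encode()).hexdigest()


def append_entry(log, actor, action):
    """Append an entry chained to the previous one."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    log.append(
        {"actor": actor, "action": action, "prev": prev_hash,
         "hash": _entry_hash(actor, action, prev_hash)}
    )
    return log


def verify(log):
    """Recompute every hash; return False if any entry was altered."""
    prev = "0" * 64
    for entry in log:
        if entry["prev"] != prev:
            return False
        if entry["hash"] != _entry_hash(entry["actor"], entry["action"], prev):
            return False
        prev = entry["hash"]
    return True


# Illustrative entries only (names and actions are made up).
log = []
append_entry(log, "examiner_a", "acquired image of device 1")
append_entry(log, "ai_tool", "flagged 14 messages as potentially relevant")
append_entry(log, "examiner_a", "verified flagged messages")
```

Here `verify(log)` returns True for an untouched log; changing any earlier entry breaks the chain for everything after it, which is the property a chain of custody needs.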

As part of that, there needs to be standardization of data inputs. Inconsistent data formats across agencies and devices can skew AI outputs and make review times longer, which defeats the purpose, so data practices need to be standardized to improve the reliability of AI.

And before an AI model should be trusted for live cases, outputs should be tested against datasets where the truth is already known. That way, investigators can have confidence in the tools they are using and the outputs that they are getting.
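Testing against a dataset with known ground truth typically means scoring the model’s outputs with metrics such as precision and recall. This sketch assumes a made-up validation set and a stand-in classifier; the item names and the `classify` function are illustrative only.

```python
# Hypothetical sketch: scoring an AI classifier against a ground-truth
# validation set before trusting it on live cases.


def evaluate(classify, ground_truth):
    """Compare model outputs to known labels; return (precision, recall)."""
    tp = fp = fn = 0
    for item, is_relevant in ground_truth:
        predicted = classify(item)
        if predicted and is_relevant:
            tp += 1          # correctly flagged
        elif predicted and not is_relevant:
            fp += 1          # false alarm
        elif not predicted and is_relevant:
            fn += 1          # missed evidence
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall


# Toy validation set: (evidence item, known-relevant?). All values invented.
validation_set = [
    ("msg_001", True),
    ("msg_002", False),
    ("msg_003", True),
    ("msg_004", False),
]

# Stand-in model: flags items whose ID ends in an odd digit.
model = lambda item: int(item[-1]) % 2 == 1

precision, recall = evaluate(model, validation_set)
```

In practice, recall matters especially here: a missed piece of evidence (false negative) is costlier than a false alarm a human reviewer can dismiss.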

Understanding and tackling data bias

As AI tools are being used more and more, it’s also important to understand the potential bias that comes with them. Models trained on historical data may bring prejudice into investigations, even if unintended.

We’ve seen that already with cases using live facial recognition, with people being wrongly identified due to biased training data.

Again, it reinforces the need for humans in the loop. Investigators who understand bias are better equipped to evaluate AI outputs critically, and decide what value they do provide, rather than accepting them at face value when they could contain bias or inaccuracies.

Bias accountability should be an ongoing factor when using AI tools. It should become part of regular governance rather than something only done when the tools are first adopted.

There’s a lot of concern about AI and how it will filter into the investigative space, and being uncomfortable with it is natural.

But in a world where smartphones are involved in 97% of criminal cases and digital evidence data is growing all the time, getting comfortable with AI, used to support automation and overseen by humans, will be a huge time saver and opportunity for investigators.

About the author

Jared Barnhart is the Customer Experience Team Lead at Cellebrite, a global leader in premier Digital Investigative solutions for the public and private sectors. A former detective and mobile forensics engineer, Jared is highly specialized in digital forensics, regularly training law enforcement and lending his expertise to help them solve cases and accelerate justice.
