The premise that an effective digital forensic examiner must be able to validate all of the tools that he or she uses is universally accepted in the digital forensic community. I have seen some less-educated members of the community champion a particularly insidious and, I will argue, invalid method of tool validation, often referred to as the two-tool validation method. The premise of this method is that if two different tools produce the same result, they must both be correct. The problem with that assumption is that it ignores the possibility that both tools share the same flaw. This may be due to unforeseen changes to operating systems or file systems, or it may simply be the result of invalid but widely accepted assumptions. Few practitioners would suggest testing a gas chromatograph by merely checking whether two of them produced the same result; they would insist on running a calibrated sample and ensuring that the results of the test matched the known values of the sample used.
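The weakness of two-tool agreement can be illustrated with a minimal sketch. The two "tools" below are hypothetical timestamp parsers, not any real forensic utility: both independently ignore the timezone offset in a raw timestamp, so they agree with each other while both disagreeing with the known value of a calibrated sample.

```python
# Sketch: two-tool agreement vs. validation against a known reference value.
# Both parsers are hypothetical; each ignores the "+0500" timezone offset,
# a shared flawed assumption.

KNOWN_SAMPLE = {"raw": "2021-03-01T12:00:00+0500", "expected_utc_hour": 7}

def tool_a(raw: str) -> int:
    """Return the hour of the timestamp. Flawed: ignores the UTC offset."""
    return int(raw[11:13])

def tool_b(raw: str) -> int:
    """Independently written, but built on the same flawed assumption."""
    return int(raw[11:13])

a = tool_a(KNOWN_SAMPLE["raw"])
b = tool_b(KNOWN_SAMPLE["raw"])

# Two-tool "validation": the tools agree, so both appear correct...
print("tools agree:", a == b)
# ...but comparison against the calibrated sample exposes the shared flaw.
print("matches known value:", a == KNOWN_SAMPLE["expected_utc_hour"])
```

Agreement between the tools tells us nothing here; only the calibrated sample, whose true value is known in advance, reveals that both results are wrong.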
Digital forensic tools cover a much broader spectrum of data recovery and presentation of recovered data, and are correspondingly more difficult to test. Nonetheless, we cannot hold them to a lower standard. Errors and limitations in the recovery and presentation of data may occur as a result of oversight, design flaws, or simply because the ability to interpret every data structure that exists in the real world is an intrinsically unattainable goal.
From: Training is Not Enough: A Case for Education Over Training by Tim Wedge