“Junk science” has been the rallying cry of lobbyists for the insurance and pharmaceutical industries. The term has mostly been used to condemn expert evidence offered by plaintiffs in civil suits. While claims that plaintiffs build cases on “junk science” are largely overblown — those claims are intended, after all, to minimize the opportunity for juries to evaluate evidence of corporate negligence — there have been a few well-publicized civil cases in which bad science may have influenced the verdict. The Daubert revolution was a judicial and legislative response to those cases.
Because outrage about junk science has been carefully nurtured by corporate lobbyists, it has focused on expert evidence presented by plaintiffs’ lawyers in civil cases. Those attacks on experts tend to overlook the questionable science that industries and their insurers fund and rely upon to avoid liability.
Until recently, even less attention was paid to junk science advanced by prosecutors in criminal cases. If Daubert has value, judges should apply it consistently to all expert evidence, regardless of the side that offers it and regardless of whether the evidence is offered in a civil or criminal case. Yet judges routinely allow prosecutors to present testimony by forensic scientists that unbiased experts recognize as junk science.
Forensic Science Reliability
Slate recently called attention to a scientific paper that it believes deserves wider media coverage. The paper, “(Mis)use of Scientific Measurements in Forensic Science” by Itiel Dror and Nicholas Scurich, discusses error rates in forensic science.
Error rates are a poorly understood factor in the application of the Daubert standard. Daubert demands that experts employ reliable scientific methodologies, and a methodology with a high error rate should generally be rejected as unreliable. While it is easy to grasp that a methodology’s reliability depends on how often it produces an accurate result, measuring error rates to validate a methodology is less intuitive.
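For readers who want to see the arithmetic behind the term, the sketch below (written in Python, with entirely invented numbers) illustrates how an error rate is typically measured in a blinded validation study of a comparison methodology. It is offered only as an illustration, not as a description of any actual study.

```python
# Hypothetical blinded validation study of a comparison methodology.
# Each trial has a known ground truth ("same source" or "different source"),
# and the examiner's conclusion is scored against it.
# All numbers below are invented for illustration.
trials = 200
false_positives = 6   # examiner declared a match when the sources differed
false_negatives = 4   # examiner excluded a source that was in fact the same

error_rate = (false_positives + false_negatives) / trials
print(f"Measured error rate: {error_rate:.1%}")  # 5.0%
```

An error rate measured this way tells a judge how often the methodology, as actually practiced, returns a wrong answer — which is the information Daubert asks courts to consider.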
Dror and Scurich point out that the error rate for many forensic science methodologies is unknown. Crime lab employees often cover up that deficiency by claiming complete confidence in their results. Confidence, however, is not a substitute for science.
Fingerprint examiners, for example, often tell juries that the science of fingerprint comparison is infallible. As Dror and Scurich explain, there is no such thing as an error rate of “zero,” despite improper testimony to that effect. In fact, they cite a study demonstrating that the same expert comparing the same fingerprints on two separate occasions will reach a different result about 10% of the time.
Well-prepared defense attorneys may be able to counter claims that fingerprint comparison is infallible with examples of mistaken fingerprint identifications that police agencies have relied upon in the past. The question, however, is whether the examiner should be permitted to testify at all — and whether a defendant should be placed at risk of a wrongful conviction — when the examiner cannot cite an error rate showing that identifications are nearly always reliable.
Dror and Scurich lament that judges have often admitted the opinions of forensic science experts who rely on methodologies that “have no properly established error rates and even when experts have implausibly claimed that the error rate is zero.” How can a judge regard a methodology as reliable when the judge has no idea how often the methodology returns an erroneous result?
Error Rate Determinations in Forensic Science
Dror and Scurich argue that the forensic sciences have difficulty measuring accurate error rates because they classify “inconclusive” results as correct ones. Rendering the opinion that a comparison is inconclusive does not mean that the opinion is correct.
Assume, for example, that nine of ten fingerprint examiners exclude the defendant as the source of a fingerprint on a pane of glass. If the tenth examiner testifies that the comparison is “inconclusive,” the examiner is likely wrong. Yet that incorrect opinion will be deemed “correct” in an analysis of error rates.
Crime lab employees too often have a bias in favor of the prosecutors and police officers who are hoping for a particular result. When examiners know the police are hoping for a ballistics match that they cannot find, they may call the comparison “inconclusive” to avoid damaging the prosecution’s case. If no match can honestly be made, the correct conclusion is an exclusion, and the “inconclusive” opinion is wrong.
Because “inconclusive” results are never counted as errors, error rate computations by forensic scientists are skewed toward making a methodology seem more reliable than it actually is. As Dror and Scurich argue, “not ever counting inconclusive decisions as error is conceptually flawed and has practical negative consequences, such as misrepresenting error rate estimates in court which are artificially low and inaccurate.”
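To make the skew concrete, here is a minimal sketch (again in Python, with hypothetical numbers not taken from Dror and Scurich) of how the treatment of “inconclusive” calls changes the error rate that gets reported to a court.

```python
# Hypothetical proficiency results for 100 comparisons with known ground truth.
# All figures are invented for illustration.
correct = 80        # examiner reached the right conclusion
wrong = 5           # examiner reached the wrong conclusion
inconclusive = 15   # examiner declined to reach a conclusion

total = correct + wrong + inconclusive

# Convention criticized by Dror and Scurich: inconclusive calls are treated
# as if they were correct, so only outright wrong answers count as errors.
reported_error_rate = wrong / total                    # 5 / 100 = 5%

# Alternative convention: an inconclusive call on a comparison that could
# have been resolved is itself a missed answer.
adjusted_error_rate = (wrong + inconclusive) / total   # 20 / 100 = 20%

print(f"Inconclusives counted as correct: {reported_error_rate:.0%}")
print(f"Inconclusives counted as errors:  {adjusted_error_rate:.0%}")
```

On these invented numbers, the same underlying performance can be presented to a jury as a 5% error rate or a 20% error rate, depending entirely on how inconclusive opinions are classified.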
Lessons Learned
Defense attorneys should consider Daubert challenges whenever a prosecution rests on the testimony of a forensic scientist. The failure to rely on a methodology with an acceptable error rate may be a fruitful basis for challenging the admissibility of an expert opinion. Defense lawyers should also consider retaining their own expert to educate the judge or jury about the danger of relying on error rates that treat “inconclusive” results as if they were always accurate.