Testing Instruments Used by Forensic Psychologists Criticized as Junk Science

Written on Tuesday, March 3rd, 2020 by T.C. Kelly
Filed under: Expert Opinions, In the News, Working with Experts

Psychologists and other mental health professionals give helpful testimony in a variety of contexts. In civil cases, they may testify about the emotional trauma experienced by an accident victim. In family law cases, psychologists determine the fitness of parents seeking child custody. In workers’ compensation cases, they provide opinions about the degree of disability caused by job-related emotional injuries.

In criminal cases, mental health experts often provide evidence that will help a sentencing court decide upon an appropriate punishment. In death penalty cases, their testimony might help a jury understand whether a defendant is likely to commit another violent crime.

While mental health experts play a vital role in the legal system, their testimony is often criticized as inexact. Proper testing of DNA can establish identity to a near certainty, but mental health experts have no comparable tools. Physicians rely on objective evidence to make a diagnosis, including CT scans and MRI results, while mental health experts are more likely to rely on subjective impressions when they identify a mental health condition.

Assessment Instruments and Subjectivity

To reduce subjectivity in forensic psychological assessments, experts have developed instruments that help them make a diagnosis. Those tools allow psychologists and other expert witnesses to base opinions on objective research findings rather than subjective impressions.

Subjective conclusions may reflect unconscious bias. They may also reflect an opinion that would not be held by a different professional conducting the same evaluation. To the extent that an assessment instrument is both valid and reliable, the instrument may help forensic experts achieve more consistent results.

Despite the advantages of using assessment tools to inform an expert opinion, a 2014 study found that a quarter of all forensic evaluations are conducted without using an assessment instrument. Experts who regularly eschew tools typically trust their professional judgment more than evidence-based assessment methods.

While using an assessment instrument may contribute to the reliability of an expert opinion, not all instruments are created equal. The criteria chosen for measurement may be based on a consensus of subjective opinion rather than an objective analysis. In addition, instruments often call for the assessor to answer subjective questions. Different psychologists administering the same test might therefore reach markedly different results.

For example, the Hare Psychopathy Checklist, a screening tool used to determine whether a subject should be classified as a psychopath, asks whether the subject exhibits “excessive glibness” or superficial charm. Two different assessors might disagree about the amount of glibness that is “excessive.” What seems to be genuine charm to one might seem superficial to another. It isn’t surprising that the tool has been harshly criticized, despite its widespread acceptance in the mental health community, as relying on criteria that are “subjective, vague, judgmental and practically unmeasurable.”

Validity of Forensic Psychology Instruments

Tess Neal, an assistant professor of psychology at Arizona State University, led a study of testing instruments commonly used to provide an objective foundation for expert opinions rendered in court. Legal scholars teamed with mental health experts to examine assessment tools commonly used by expert witnesses. The study’s findings will likely fuel Daubert challenges while providing ammunition for challenging opinions on cross-examination.

The study examined 30 assessment tools “to determine their popularity among experts and their scientific credibility.” Neal and her colleagues assessed a variety of instruments, including “aptitude tests (e.g., general cognitive and ability tests), achievement tests (e.g., tests of knowledge or skills), and personality tests.”

The study found that only about two-thirds of popular assessment tools are generally accepted as reliable in the field of psychology. It also determined that there is only a “weak link” between general acceptance of a tool’s reliability and its actual reliability.

Actual reliability was determined by whether the instruments received “favorable reviews of their psychometric and technical properties in authorities such as the Mental Measurements Yearbook.” Only about 40% of popular assessment instruments have been favorably reviewed.

Some tests, such as the Static-99 (a sex offender risk assessment tool), are generally accepted as reliable despite the absence of any professional reviews. Others, such as the Structured Inventory of Malingered Symptomatology (SIMS), are generally accepted despite having largely unfavorable reviews. The assumption that an instrument is reliable seems to be detached from evidence-based research.

The authors report that psychological testing is a large and profitable business. Yet it is not always true that “psychological tests published, marketed, and sold by reputable publishers are psychometrically strong tests.”

According to the study, “some psychological assessment tools are published commercially without participating in or surviving the scientific peer-review process and/or without ever having been subjected to scientifically sound testing—core criteria the law uses for determining whether evidence is admissible.” The mental health experts who use an instrument may be unaware that it has never been peer-reviewed or validated with testing.

Failure to Challenge Assessment Instruments

The study also noted that lawyers have done a poor job of challenging the reliability of assessment evidence. Judges and lawyers tend to accept the evidence without question.

The study’s key finding is startling: “Challenges to the most scientifically suspect tools are almost nonexistent. Attorneys rarely challenge psychological expert assessment evidence, and when they do, judges often fail to exercise the scrutiny required by law.”

The study found that lawyers challenged the admissibility of only 5% of expert opinions that were based on the surveyed assessment instruments. The majority of those challenges addressed how the expert used the tool (i.e., whether the expert followed the instructions correctly) or whether the expert interpreted the results correctly.

A more fundamental challenge would address the validity of the instrument itself. Daubert requires expert opinions to be based on adequate facts and a reasonable methodology. If an assessment tool has not been determined by peer-reviewed studies to produce reliable results, opinions that are driven by the tool may be ripe for a Daubert challenge.

When validity challenges are made, they often fail. Judges base decisions on the evidence and arguments presented at a Daubert hearing, so it may be unfair to criticize judges for failing to recognize the weaknesses of assessment instruments that have not been validated.

Still, the study found that courts sometimes view test results as only one fact among the many that inform the expert’s opinion. If that fact is unreliable, however, Daubert would prevent an expert from using the test result as support for an opinion. Since reliance on a testing instrument bolsters a psychologist’s subjective opinion with data that is supposedly objective, a jury might be swayed by unreliable test results, even if the jury would not be persuaded by the expert’s testimony in the absence of those results.

The study’s “bottom-line conclusion is that evidentiary challenges to psychological tools are rare and challenges to the most scientifically suspect tools are even rarer or are nonexistent.” Effective representation of a client may require lawyers to raise Daubert challenges to opinions based on psychological assessment instruments, even if the instruments are widely used.

Using Experts to Challenge Experts

When one party calls a mental health expert to testify, it is nearly always imperative for the opposing party to use its own expert to challenge that testimony. Professor Neal’s study provides a means for experts to challenge opinions that are based on the findings of popular assessment instruments.

Michael Saks, a professor of law at ASU’s Sandra Day O’Connor College of Law, stresses the importance of challenging the credibility of psychological evidence. Challenging biases that are inherent in assessment instruments is an important means of ensuring that juries do not place undue weight on opinions that are only loosely grounded in science.

Professor Saks hopes that the study will encourage expert witnesses to be skeptical of their own testing instruments. Professor Neal agrees that psychologists need to be more introspective by challenging their own assumptions about the validity of their tools. At the very least, experts should be prepared to acknowledge the limitations of their findings and to admit that psychological opinion evidence can never be entirely free of subjectivity.


About T.C. Kelly

Prior to his retirement, T.C. Kelly handled litigation and appeals in state and federal courts across the Midwest. He focused his practice on criminal defense, personal injury, and employment law. He now writes about legal issues for a variety of publications.