Brandi Carl and Dianna Balderrama sued Johnson & Johnson after discovering that they suffered from ovarian cancer. Carl and Balderrama attributed their cancer to their use of Johnson & Johnson Baby Powder. After losing several trials, J&J has stopped marketing the product while continuing to insist that it is not carcinogenic.
Carl and Balderrama’s cases were selected as the first two cases to be tried in multi-county litigation in New Jersey. Johnson & Johnson moved to exclude the opinions of the plaintiffs’ two causation experts. The trial court granted that motion and then entered summary judgment in favor of J&J. The Appellate Division of the Superior Court of New Jersey reversed the judgment.
The court’s opinion is noteworthy not just for its careful analysis of the evidence, but for its thorough discussion of the admissibility of expert testimony in toxic tort cases. While the insurance industry has condemned nearly all expert testimony offered by plaintiffs in toxic tort cases as “junk science” — and while some judges have echoed that hostility to plaintiffs’ experts — the Appellate Division’s dispassionate analysis is a model for how courts should apply the Daubert decision in cases involving allegedly dangerous substances.
Daubert in New Jersey
In 2018, the New Jersey Supreme Court analyzed the state’s rules of evidence governing the admissibility of expert testimony in civil cases. The court adopted the Daubert factors for assessing the reliability of expert testimony and incorporated them into New Jersey law.
The Appellate Division emphasized that the reliability of a methodology does not depend on whether the trial judge agrees with the expert’s conclusions. The focus is on “the level of intellectual rigor” that the expert displays. If the expert’s methodology is based on sound principles and the expert applies those principles to relevant data in a reliable way, the expert’s testimony is admissible, whether or not the judge is persuaded by the expert’s opinions.
Application of Daubert to Epidemiology
The precise cause of a disease can rarely be determined with certainty, but certainty is not the standard of proof in civil cases. The plaintiffs only needed to prove that asbestos contaminating the talc in baby powder probably caused their ovarian cancer.
Experts typically determine whether exposure to an agent caused a disease by reference to epidemiological studies. Courts regard epidemiological studies as reliable when they reveal an association between an agent and a disease and when the association is probably not the result of a limitation in the study, such as a sampling error.
All of the experts in the case agreed that valid epidemiological studies include cohort studies, which compare exposed and unexposed people over a period of time, and case-control studies, which compare the exposure of people who have acquired a disease to that of a control group of people who have not. Both types of studies can yield relevant information, and neither is necessarily superior to the other. Statistical methods, including a pooled analysis or meta-analysis, may help experts draw conclusions when individual studies are in apparent conflict.
To raise an inference of causation, a study must produce a relative risk (or odds ratio) of more than 1.0. A value of 1.0 means the exposed and unexposed groups experience the disease at the same rate, so a value above 1.0 implies that the association is greater than chance alone would produce. Study results must also be statistically significant.
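The arithmetic behind these two requirements can be sketched with a hypothetical case-control table. The counts below are illustrative only, not drawn from any study discussed in the opinion:

```python
import math

# Hypothetical case-control counts (illustrative, not from the case):
#              exposed  unexposed
cases    = (600, 400)   # women with the disease
controls = (500, 500)   # women without the disease

a, b = cases     # exposed cases, unexposed cases
c, d = controls  # exposed controls, unexposed controls

# Odds ratio: odds of exposure among cases divided by odds among controls
odds_ratio = (a * d) / (b * c)

# Standard error of log(OR) and a conventional 95% confidence interval
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
# If the entire interval lies above 1.0, the association is both
# positive and statistically significant at the usual 0.05 level.
```

With these invented numbers, the odds ratio is 1.5 and the interval excludes 1.0, so both requirements described above would be satisfied.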
When studies permit an inference of causation, experts then decide whether the association reflects an actual causal connection. Experts often rely on the Bradford Hill criteria to distinguish mere association from causation. The appellate court’s thorough review of those factors ends with Hill’s admonition that absolute certainty should never be required to demonstrate causation because science by its nature is based on incomplete knowledge. Experts offer their best understanding, not a perfect understanding.
The Appellate Division noted that experts should be advocates for the truth, not for a party. Experts must therefore consider the entire body of scientific research rather than cherry-picking results that support their opinions. Experts are entitled to reject significant evidence that does not support their opinions, but they must offer a reasonable explanation for doing so.
Daniel Cramer’s Expert Opinion
A lengthy section of the court’s opinion reviewed the scientific literature upon which the experts for both parties relied. The court noted that neither party claimed that the studies were based on unsound methodologies, misstated their results, evidenced bias, or were otherwise unworthy of consideration by the scientific community.
The court then discussed the opinions offered by the plaintiff’s experts. Daniel Cramer, a professor of obstetrics, gynecology, and reproductive biology at Harvard Medical School, has studied the relationship between genital powders and ovarian cancer for many years.
Based on his literature review and his own research, Cramer concluded that the odds ratio comparing women who used talc-based powders to women who did not was 1.29, a statistically significant result.
Cramer applied the Bradford Hill criteria and explained his disagreement with other experts about the application of certain factors. He acknowledged shortcomings in the literature, including the inability to standardize a measurement of the amount of powder that women applied or the amount that entered the body. He noted that recent literature approximated that information by asking subjects how frequently they used powder. That information permitted a dose-response analysis that some courts consider to be critical evidence of causation in cases involving unsafe drugs and products.
In addition to expressing the opinion that talc-based powders can cause ovarian cancer (general causation), Cramer assessed the likelihood that talc-based powder caused the ovarian cancers with which Carl and Balderrama were diagnosed. He considered specific risk factors, including obesity, genetic history, and use of oral contraceptives. He assessed the two women in light of studies that most closely matched the factors associated with each woman. While acknowledging the possibility of alternative causes, he identified the use of talc-based powders as the most likely cause of the ovarian cancer that each woman acquired.
The Court’s Correct Understanding of Relative Risk
Cramer explained his disagreement with industry opinions that a relative risk of less than 2.0 is insufficiently strong to create an inference of causation. He noted that no scientist has ever expressed that opinion. As long as bias and other causes of error can be ruled out, there is no magic odds ratio that creates a threshold for inferring causation.
Cramer’s opinion, it should be noted, is contrary to the position adopted by some courts, including a significant number of federal courts. Those courts have concluded that a relative risk of less than 2.0 cannot prove that causation is “more likely than not.” Those courts often confuse general causation, which asks only whether a substance can cause an illness, and specific causation, which asks whether the substance probably caused the plaintiff’s illness.
As a paper for the National Academies of Sciences explains, judicial insistence on a relative risk of at least 2.0 is based on false assumptions, proving once again that science should be left to scientists. Even a low risk is a risk. General causation does not ask whether a substance probably caused a disease but whether it is capable of causing the disease. The “more likely than not” standard of proof is relevant to specific causation but not to general causation.
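The arithmetic behind the judicial 2.0 threshold can be made explicit. Under the simplifying (and, as the court explained, contested) assumption that relative risk translates directly into a probability of causation, the fraction of cases among exposed people attributable to the exposure is (RR − 1) / RR, which exceeds 50% only when RR exceeds 2.0. The RR values below are illustrative:

```python
def attributable_fraction(rr: float) -> float:
    """Fraction of exposed cases attributable to the exposure,
    under the simplifying assumption criticized in the text (RR > 1)."""
    return (rr - 1.0) / rr

for rr in (1.29, 2.0, 3.0):
    print(f"RR = {rr}: attributable fraction = {attributable_fraction(rr):.0%}")
# An RR of 2.0 corresponds to exactly 50%; an RR of 1.29 corresponds
# to roughly 22%. That is the reasoning courts use when they demand an
# RR above 2.0 for specific causation -- but, as the text notes, the
# threshold says nothing about general causation, which asks only
# whether the substance is capable of causing the disease.
```

Even on its own terms, this calculation concerns only the “more likely than not” question of specific causation, which is why applying it to general causation rests on the false assumptions the paper identifies.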
Other Expert Opinions
The court evaluated the expert opinions of Graham Colditz, an epidemiologist, on general causation. His expert report reviewed the literature and concluded that genital talc use can cause ovarian cancer.
Colditz carefully explained why he gave greater weight to studies that took a stronger analytic approach than studies that were analytically flawed. Colditz agreed with Cramer that the magnitude of risk need not reach 2.0 to support an inference of general causation.
The court also considered the expert opinions of John Godleski, a Harvard Medical School professor of pathology. He analyzed tissue samples from Carl and Balderrama. He concluded that the tissues contained substantial amounts of talc.
Curtis Omiecinski, a professor of molecular toxicology at Penn State, explained how talc in baby powder can migrate from the perineum to the ovaries. He also explained how talc can cause inflammation that triggers the development of cancer.
The court’s thorough review of the expert reports and the underlying literature convinced it that the trial court erred in excluding the testimony of the plaintiffs’ experts. The experts based their opinions on a significant number of reliable studies. They provided reasonable explanations for giving greater weight to some studies than others. They did not misinterpret the studies or give undue weight to a small subset of studies.
The experts anchored their opinions on reasonable scientific evidence that provides a plausible explanation of the mechanisms by which talc in baby powder enters the body and causes ovarian cancer. They carefully applied the Bradford Hill criteria in reaching opinions about general causation. They met the defendants’ objections to their reasoning with reasonable answers.
The court concluded that the methodologies used by the plaintiffs’ experts were reasonable. The cumulative data used in the studies upon which they relied was sufficient to support their opinions. Attacks on the quality of that data raised questions of credibility which were for the jury to assess.
The trial court erred by weighing the defense evidence against the plaintiffs’ evidence. Competing expert opinions should be weighed by juries, not judges. The trial judge’s preference for cohort studies over case-control studies found no evidentiary support in the record and was contrary to New Jersey precedent.
The trial court’s contention that the plaintiffs’ experts relied on a “made for litigation methodology” missed the point. The question is whether the methodology is reliable. The plaintiffs’ experts used methodologies that are generally accepted as reliable by scientists in their field and based their analysis on sufficient data. That is all that Daubert requires.
Because the trial court overstepped its role by accepting the opinions of defense experts as more credible than those of the plaintiffs’ experts, it erred by excluding the plaintiffs’ expert testimony. And because summary judgment rested on the exclusion of that evidence, the summary judgment was reversed and the case was remanded for trial.