Author Archives: T.C. Kelly

About T.C. Kelly

Prior to his retirement, T.C. Kelly handled litigation and appeals in state and federal courts across the Midwest. He focused his practice on criminal defense, personal injury, and employment law. He now writes about legal issues for a variety of publications.


Will AI Replace Expert Witnesses?

Fans of science fiction are familiar with the role that Artificial Intelligence (AI) might eventually play in our daily lives. Even in the present, machines are being programmed to simulate intelligence by learning, adapting, and making reasoned decisions that respond to new situations.

Self-driving cars create the illusion of intelligence because they use algorithms to make decisions based on constantly changing data. Doctors depend on AI to help them diagnose health conditions, a task that machines may eventually be able to perform more accurately than physicians and without their assistance. Electronic tutors help children learn by assessing whether a student is bored or struggling and by changing lessons to meet the student’s needs.

Some have suggested that lawyers and judges will eventually be replaced by AI counterparts. Legal reasoning might even be improved by removing bias and political leanings from the decision-making process.

Jurors cannot be replaced by machines without amending federal and state constitutional guarantees of jury trials. Whether it would be wise to remove human emotion from the process of rendering a verdict probably depends on how lawyers feel about humans.

Expert Witnesses and AI

Matthew Robert Bennett and Marcin Budka have been “investigating the potential for AI to study evidence in forensic science.” Their results have been mixed. Focusing on the ability of AI to analyze footprint evidence, they found that the AI “was better at assessing footprints than general forensic scientists, but not better than specific footprint experts.”

Footwear imprints are commonly found at crime scenes. The FBI contends that “footwear impression evidence often provides an important link between the suspect and the crime scene.”

Forensic investigators gather shoeprint or footprint evidence at crime scenes. Forensic experts then offer opinions about their source. Scientists warn that prints found at crime scenes are often of low quality and that methods used to take impressions of the prints may change their characteristics. “For this reason, there will always be some uncertainty concerning whether a suspect’s shoe truly matches the crime scene print, or if the match is simply a false positive.”

Training an AI to Be an Expert

Footprint experts may be able to determine a suspect’s height, weight, and gender from the size of his or her footprints. However, Bennett and Budka determined that podiatrists have a 50% success rate when they determine gender based on footprints — the same success rate that guessing would produce.

Bennett and Budka trained an AI to perform the same task. The AI arrived at a correct conclusion about gender 90% of the time.
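
The article does not describe Bennett and Budka’s data or model, but the general technique — training a statistical classifier on measurable footprint features — can be sketched in a few lines of Python. Everything below (the features, the synthetic measurements, and the choice of logistic regression) is an illustrative assumption, not the researchers’ actual method.

```python
# Toy sketch of a footprint-based gender classifier, for illustration only.
# The synthetic data and the logistic-regression model are assumptions; the
# study's actual features, data, and model are not described in the article.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: footprint length and width in centimeters.
length = rng.normal(25.5, 1.8, n)
width = rng.normal(9.8, 0.9, n)
X = np.column_stack([length, width])
# Hypothetical labels (0 or 1), loosely tied to footprint size so the toy
# problem is learnable.
y = (length + rng.normal(0, 1.5, n) > 25.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```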

Shoeprint experts rely on experience and databases to identify the make of a suspect’s shoe. Bennett and Budka concluded that true experts are rarely mistaken in their identification. Unfortunately, there are few true footwear experts in the UK, where Bennett and Budka conducted their study.

Forensic investigators and police detectives make use of the UK’s extensive database of footwear and shoeprints. They are much less successful than experts at identifying footwear from shoeprints. Bennett and Budka tried to determine whether an AI could do a better job.

They trained a second AI to identify the make and model of footwear from black and white photographs of shoeprint impressions. They ran several trials with casual database users and discovered that their success rate varied from 22% to 83%. When they ran the same trials with their AI, the success rate varied from 60% to 91%.
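
The study’s actual architecture is likewise not described in the article. As a rough sketch of what training an image classifier on black and white shoeprint photographs involves, the toy example below fits a small convolutional network to synthetic grayscale images; the image size, the five hypothetical footwear classes, and the network layout are all assumptions.

```python
# Toy sketch of an image classifier for shoeprint impressions, illustration only.
import torch
import torch.nn as nn

NUM_CLASSES = 5                               # hypothetical footwear makes/models
images = torch.rand(64, 1, 64, 64)            # synthetic grayscale "shoeprints"
labels = torch.randint(0, NUM_CLASSES, (64,))

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, NUM_CLASSES),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                        # a few passes over the toy batch
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```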

The AI was better than database users who were not among the UK’s elite footwear experts, but still had a significant error rate. Actual footwear experts performing the same trial were nearly always right. The lesson learned is that AI isn’t yet ready to supplant human experts, at least when it comes to footwear identification — and that detectives who fancy themselves to be experts are not remotely qualified to render accurate opinions.

AI in Court

Perhaps R2D2 will one day be allowed to testify as an expert witness. At present, courts don’t need to grapple with the ability of an AI to swear an oath. As filtered through human witnesses, however, evidence created by an AI can influence the outcome of trials.

ExpertPages recently discussed the perils of relying on ShotSpotter, a form of AI that uses algorithms to deduce the location of loud noises that it identifies (not always correctly) as gunshots. Even if the accuracy of ShotSpotter were a given, the ability of humans to tweak the results that algorithms produce raises questions about the reliability of ShotSpotter. Investigations by journalists have “identified a number of serious flaws in using ShotSpotter as evidentiary support for prosecutors.”

Apart from the potential unreliability of human witnesses who present the conclusions drawn by AI, the accuracy of an AI’s conclusions depends on the reliability of the algorithm that teaches the AI to “think.” The creators of AI generally claim a proprietary interest in the algorithms, refusing to open them for inspection by an opposing party. That makes “trust us” the common fallback position of human witnesses who explain how an AI reached its conclusions.

In cases that depend on evidence generated by an AI, opposing parties may need to hire an AI expert. The expert will likely have a background in computer science or information technology but might need to work with a second expert who has specialized knowledge of biomechanics, e-commerce, or a variety of other disciplines, such as footwear recognition or acoustics. While the goal of AI is to improve the human condition by allowing computers to arrive at faster and more accurate conclusions than humans, the present reality is that human experts are often needed to testify about the flaws in AI decision-making.

Late Disclosure of Underlying Data Does Not Bar Damages Expert from Testifying

Plaintiffs in wrongful death cases routinely ask expert witnesses to compute the loss of the victim’s anticipated contributions to a family. The experts generally rely on past earnings as part of their analysis. Whether the belated disclosure of that data renders an expert’s opinions unreliable or otherwise requires exclusion of the expert’s opinion was the issue in a recent federal case in the District of Nevada.
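
Court opinions rarely reproduce the expert’s arithmetic, but a lost-support analysis of the kind described above typically projects future earnings, subtracts the decedent’s personal consumption, and discounts the remainder to present value. The figures in the sketch below are invented for illustration and are not drawn from the case.

```python
# Simplified lost-support calculation, for illustration only. Every figure
# (earnings, consumption share, growth and discount rates, work-life years)
# is an assumption, not a fact from the case.
annual_earnings = 60_000      # hypothetical base earnings from past tax returns
personal_consumption = 0.30   # share the decedent would have spent on himself
growth_rate = 0.02            # assumed annual wage growth
discount_rate = 0.04          # assumed rate for discounting to present value
years = 25                    # assumed remaining work-life expectancy

present_value = 0.0
for t in range(1, years + 1):
    projected = annual_earnings * (1 + growth_rate) ** t
    support = projected * (1 - personal_consumption)
    present_value += support / (1 + discount_rate) ** t

print(f"Estimated present value of lost support: ${present_value:,.0f}")
```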

Facts of the Case

Nickles and Dimes, Inc., a company that owns and operates amusement arcades in shopping malls, employed Charles Wyman as a route manager. Wyman was electrocuted while servicing a “claw” game machine in Las Vegas. The machine was manufactured by and purchased from Smart Industries Corporation. 

Unbeknownst to Wyman, the claw machine was electrically charged because a power wire and a grounding wire inside the machine were reversed. Wyman was in contact with the electrified machine for about ten minutes before a firefighter unplugged it. Wyman died without recovering consciousness.

Wyman’s estate, several of his surviving family members, and the company that insured Nickles and Dimes filed lawsuits against Smart and other entities for wrongful death. All of the lawsuits were eventually removed to federal court and consolidated. 

Some claims have been settled or dismissed. The remaining liability question is whether the claw machine left the factory in a defective condition. The disputed damages issue relates to lost financial support arising from Wyman’s death.

Expert Report

Terrence Clauretie prepared a preliminary expert report on behalf of the Wyman family members. His report calculated the financial support Wyman would have contributed to his family if he had not died. That report was disclosed within the time set by the court’s scheduling order.

Years after the preliminary report was filed, Smart moved to strike Clauretie as an expert witness. Smart complained that the expert report was incomplete when it was filed because Clauretie did not produce income records in support of the report. Smart also complained that the family members did not provide income records in discovery and that the income information Clauretie reviewed was so vaguely described that Smart could not “discern with certainty” the information he received. 

The family members responded that the preliminary report made clear that it was based on incomplete information and that additional information would be provided later. The preliminary report stated that the final calculation was unlikely to differ significantly from the preliminary calculation. 

After discovery closed, Clauretie received tax information from 2011 to 2013 and prepared a supplemental report that included a final estimate of loss. The final estimate did not differ from the preliminary estimate. 

Smart’s Objections

Smart premised its motion to strike on the argument that a preliminary report does not contain a “complete statement” of all the expert’s opinions as required by Rule 26(a)(2)(B). It also argued that Clauretie’s preliminary report did not include all facts and data he considered or exhibits that would be used to support his opinions.

Smart also objected that Clauretie based his report on statistical data regarding average households rather than Wyman’s actual income. That methodology, in Smart’s view, is unreliable and rendered his opinion speculative.

Smart conceded that Clauretie may have reviewed tax information before he prepared his preliminary opinion but complained that it did not know what information he reviewed. Smart asked the court to strike Clauretie as an expert witness because it could not conduct meaningful discovery of the facts that supported his final opinion.

Reliability Is a “Flexible Concept”

The district court reviewed Clauretie’s opinions for relevance and reliability. Clauretie’s calculation of lost financial support was obviously relevant to an award of damages in a wrongful death case. 

Unlike judges who view themselves as the ultimate authority on reliability, the court recognized that its job was not to determine whether the expert’s opinions are sound but whether the expert used a sound methodology to arrive at those opinions.

The court also recognized that reliability is “a flexible concept.” The court recognized that experts are not required to use a perfect methodology, or even the best methodology. Experts have discretion to choose among various reliable methodologies. Whether that choice affects the reliability of their opinions is for the jury to decide.

The district court defined its gatekeeping role as screening out “nonsense opinions.” It is not the court’s job to reject impeachable opinions. Unless the expert “lacks good grounds” for an opinion, it is opposing counsel’s job to expose weaknesses in the expert’s analysis through cross-examination, and it is the jury’s job to decide whether to accept or reject the opinion.

Reliability of Clauretie’s Opinions

The court rejected the argument that Clauretie failed to identify the facts upon which he based his opinion. The preliminary report identified the substantial information upon which it was based, including Clauretie’s review of tax returns from 2014 and 2015.

The court also rejected Smart’s claim that it never received those tax returns. The returns were attached to a disclosure that the plaintiffs filed but later withdrew after Smart objected that it was filed after the discovery deadline expired. The court did not allow Smart to pretend that it never received the tax returns. Since discovery was later reopened, Smart could have asked to depose Clauretie concerning his reliance on those tax returns. 

The preliminary report indicated that earlier tax returns were not likely to change Clauretie’s opinion significantly. After Clauretie received the 2011 to 2013 tax returns, he prepared a brief supplemental report and confirmed that the additional data did not change his calculation. Nothing in that sequence of events, including the belated disclosure of the 2014 and 2015 tax returns, rendered Clauretie’s opinion unreliable.

Late Disclosure

The court also considered whether the belated filing of Clauretie’s supplemental report justified striking him as an expert witness. Rule 26 requires expert reports to be filed within the time designated by the court. While Rule 26(e) permits supplemental reports, that rule does not create a loophole that permits experts to file incomplete reports before the disclosure deadline.

Rule 37 generally requires the exclusion at trial of opinions and data not included in the report that Rule 26 requires.  The court recognized that the rule nevertheless permits relief from its “harsh requirements” when a failure to disclose was substantially justified or harmless. In addition, the rule authorizes the court to impose sanctions that are less harsh than the exclusion of evidence.

Different judges apply Rule 37 in different ways. Judges who take a mechanistic approach require strict adherence to deadlines and believe that courts should rarely decline to exclude expert evidence when the rule is violated. Judges who believe that cases should usually be decided on their merits by juries, not by judges as a sanction for rules violations, are more inclined to exercise their discretion to avoid harsh results.

The court agreed that the plaintiffs offered no satisfactory explanation for their late production of either set of tax returns. While the belated disclosure might not have been entirely harmless, if only because the delayed resolution of a case is theoretically harmful to the administration of justice, the court decided that “the public policy favoring disposition of cases on their merits and the availability of less drastic sanctions” weighed against striking Clauretie as an expert witness.

While Smart did not benefit from timely production of the tax returns, it did receive the preliminary expert report on time. That report incorporated a number of exhibits and explained how Clauretie arrived at his opinions. Smart had the tax returns before discovery was reopened. It manufactured its own prejudice by failing to take Clauretie’s deposition so it could question him about the tax returns. Nor did Smart move to compel disclosure of the tax returns.

The court noted that Smart was not required to take Clauretie’s deposition or to move to compel disclosure of documents that Rule 26 requires to be produced. At the same time, Smart was not in a position to argue that it was harmed when it knew of Clauretie’s opinions, had his report, and had the opportunity to depose him. Smart could not claim to have been surprised by Clauretie’s opinions since they did not change from the time the preliminary report was filed. Since Smart received the tax returns before a trial date had even been set, the exclusion of Clauretie’s testimony would be an unduly harsh sanction.

The court decided that a lesser sanction than exclusion was appropriate. It allowed Smart to take Clauretie’s deposition and required the Wymans to pay all expenses associated with it. It also required the Wymans to pay attorneys’ fees associated with the motion to strike.

Lessons Learned

The district court judge wisely avoided a knee-jerk response to belated discovery disclosures. The judge’s ruling was tailored to the lack of harm associated with the late disclosure. 

Lawyers should understand that some judges are more concerned with enforcing their scheduling orders than with producing just results. Experts and the lawyers who hire them should always do their utmost to either (1) produce reports and data relied upon to support opinions by the date set in the court’s scheduling order, or (2) move for an extension of time to produce that data with a showing of good cause for the delay. Violating a rule and hoping that the judge will not impose a harsh sanction is never a good strategy.


The Psychological Impact of Remote Testimony on Expert Witnesses

The latest pandemic surge is causing a new round of trial delays and courthouse closures in some parts of the country. Lawyers can only guess when the “new normal” will arrive. Defining a post-pandemic “normal” may be just as challenging as working during a pandemic.

Remote deposition testimony has become common. It took time for lawyers and court reporters to work the kinks out of remote deposition technology and procedures, but most lawyers have adjusted to the change. Whether remote depositions should continue after the pandemic is a subject of debate. Positions may change from case to case, depending on whether a remote deposition is likely to be helpful or harmful to the lawyer’s client.

Courts have routinely conducted hearings remotely, although permitting a trial witness to testify from a remote location is less common. The federal rule permits remote trial testimony in civil cases “for good cause in compelling circumstances and with appropriate safeguards.” Whether the pandemic constitutes good cause or compelling circumstances is a question that courts decide on a case-by-case basis.

Procedural rules in arbitration tend to be more relaxed than court rules. While the rules that govern arbitration proceedings differ, the most common sets of rules vest the arbitrator with considerable discretion to take testimony remotely. 

Impact on Expert Witnesses

Lawyers view remote testimony in terms of strategy. They might not give much thought to the psychological impact of remote testimony upon a testifying expert. A report by the Berkeley Research Group examines “the possible psychological impact of different ‘hearing’ environments” upon participants in arbitration proceedings, including expert witnesses. BRG is a global consulting firm that provides services to clients in a variety of industries.

The report is based on interviews with participants in arbitration hearings that used remote technology. After spending a few months overcoming glitches in that technology, participants generally developed a favorable impression of remote hearings.

Expert witnesses, in particular, had a positive response to remote hearings. Testifying from the expert’s home or office places experts at ease. The experts believed that the familiar setting allowed them to consider questions more thoughtfully, free from the distractions that are inevitable in a crowded conference room or courtroom.

The report also highlights the perceived benefit of “the additional virtual barrier between the expert witness providing evidence and those tasked with cross-examination.” The report suggests that attempts “to place pressure on and unnerve” expert witnesses were less effective when the lawyer could not confront the witness in a face-to-face setting. It’s difficult to badger a witness from afar.

Experts who testify in person may feel bullied by aggressive cross-examination tactics. While remote testimony may be a disadvantage for lawyers who want to rattle experts during cross-examination, it is fair to ask whether intimidation is a tactic that promotes just results. A witness might be more likely to give thoughtful and reasoned answers when aggressive cross-examination tactics are offset by distance.

On the other hand, comfort cuts both ways. When a cross-examining lawyer cannot intimidate a witness, a better tactic may be to lull the witness into a false sense of security. Remote testimony can seem like a one-on-one conversation between the witness and the lawyer. Witnesses may be tempted to say too much, forgetting the standard admonition to answer only the question that was asked and not to volunteer information.

Preparing the Expert for Remote Testimony 

Experts also noted that in-person depositions are usually accompanied by in-person meetings with the lawyer who prepares them to testify. Experts found that meeting in person results in more thorough preparation. A mock cross-examination is more likely to build confidence when it is conducted in person. Preparing in person also contributes to strong communication and reduces the risk that the expert and the lawyer who retained the expert will not be on the same page when the expert testifies. 

The report suggests that experts and lawyers both benefit from meeting in person to prepare the expert’s testimony. Even when a hearing or deposition is conducted remotely, it may be sensible to meet in person when the time comes to prepare the expert to testify.

Psychologists have identified “Zoom fatigue” as a potential drawback of remote proceedings. Staring at a screen is less engaging than watching witnesses testify in person. An arbitrator’s mind might start to wander after hours of watching an expert give less-than-scintillating testimony about technical issues. Lawyers should keep “Zoom fatigue” in mind by breaking up lengthy hearing testimony into a series of shorter answers to help keep the judge or arbitrator engaged.

Impact on Outcomes

Study participants were divided in their opinion about the impact of remote testimony on hearing outcomes. Some believed that the hearing environment has an impact on the decision-making process. Most participants believed that the professionalism of judges, lawyers, and expert witnesses overcomes the disadvantages of remote testimony.

Expert witnesses did not view remote testimony as changing the nature of their testimony or the way that testimony would be perceived. Experts believed they had adapted to remote testimony with ease.

The key takeaway from the BRG report is that expert witnesses benefit from in-person preparation even when they testify remotely. Lawyers may have a better outcome when they meet with an expert witness in person to prepare the expert’s testimony.


Court Requires Dual-Hat Expert to Produce Materials He Created as a Consulting Expert

Some lawsuits repeatedly showcase the nuances of the federal rules governing expert testimony. The multi-district litigation involving C.R. Bard’s mesh products has produced multiple rulings that offer guidance to lawyers who rely on expert testimony in federal court. The latest ruling sheds light on the circumstances under which dual-hat experts must disclose the materials they prepare when they form opinions.

Facts of the Case

Steven Johns is one of thousands of plaintiffs who sued C.R. Bard after suffering injuries allegedly caused by defects in the company’s polypropylene hernia mesh products. Those lawsuits have been joined in multidistrict litigation. Johns’ lawsuit is the first “bellwether” case that will be tried. Last year, an ExpertPages blog discussed the court’s decision to admit the testimony of Johns’ causation expert.

Johns contends that the company’s Ventralight ST mesh device is defective. The mesh was implanted in Johns to repair a hernia. Johns had a gap in the layer of connective tissue called the fascia. The mesh implant was intended to close that gap.

One side of the Ventralight ST mesh is coated. The side with “ST coating” is placed against an organ, such as the patient’s bowels. The uncoated polypropylene side is placed over the gap in the fascia.

The coating is intended to delay resorption of the mesh into the organ. The plaintiffs allege that the mesh resorbs too quickly, exposing organs to damage caused by the bare polypropylene. Johns alleges that after his hernia repair, he developed adhesions in a fatty structure associated with the bowel called the omentum. Johns attributes the adhesions to the defective mesh.

Competing Expert Testimony

Bard intends to present the expert testimony of Stephen Badylak, who examined photomicrographs of slides from Bard’s clinical animal study on the Ventralight ST. Bard wants Badylak to testify that the ST coating remained on the mesh device 28 days after it was implanted. The testimony is arguably important because the plaintiffs contend that the coating was generally gone within a week.

The plaintiffs had hired Tamas Nagy to conduct a similar analysis. Nagy reviewed slides from the animal study, took photomicrographs of the slides, reviewed them using a score sheet (as did Badylak), and made notes of his findings. After consulting with Nagy, the plaintiffs represented that Nagy would not offer any expert opinions.

Nagy did not prepare an expert report. On that basis, the defendants moved to strike him as an expert. The parties disputed whether the plaintiffs properly redesignated Nagy as a consulting rather than a testifying expert. The court took no immediate action on the motion to strike Nagy.

In a second supplemental expert report, Badylak advanced his opinion about the presence of ST coating after 28 days of implantation. The plaintiffs challenged the admissibility of Badylak’s opinion. The court determined that Badylak’s methodology was sufficiently reliable to permit his testimony.

The court also allowed the plaintiffs to call Nagy as a rebuttal expert to challenge Badylak’s conclusion that coating remained after 28 days. Based on that ruling, the court denied the motion to strike Nagy as an expert.

Nagy re-reviewed the slides and prepared a rebuttal expert report. When Bard deposed him, he failed to produce materials related to his initial review of the slides, including his photomicrographs, notes, and score sheets. Bard then paused the deposition and moved to compel production of those materials.

Motion to Compel

The court began with the proposition that an opposing party is entitled to production of all materials “considered” by an expert before or while forming an expert opinion, whether or not the expert relies on those materials in the expert’s report. An expert “considers” materials when the expert receives, reads, reviews, or authors those materials, provided that their subject matter relates to the opinions that the expert expresses. The party resisting disclosure bears the burden of showing that the expert did not consider the materials.

Nagy testified that he re-reviewed the photomicrographs before he produced his rebuttal report. He therefore “considered” them in connection with the opinions he formed. There seems to be little dispute that Nagy’s photomicrographs were discoverable.

Nagy authored his notes and score sheets. Since those documents relate to the general subject matter of the opinions Nagy expressed in his rebuttal report, the court began by asking whether the plaintiffs met their burden of showing that Nagy did not consider them in reaching the opinions he expressed.

Nagy testified that he did not need to re-score the score sheets when he formed his rebuttal opinion. The court did not regard Nagy’s testimony as a denial that he considered the original score sheets when he formed the opinions expressed in his rebuttal report. In any event, an expert’s denial that he considered certain data or facts does not control the dispute because the defendants are not required to believe an expert’s testimony. 

The court also rejected the plaintiffs’ assertion that Nagy made the notes and score sheets during his initial review for an unrelated purpose — to examine the degree of inflammatory response in the tissue. Nagy’s rebuttal report discussed inflammatory response as evidence that the ST coating was or was not present on the mesh, suggesting that his first and second review did not address entirely different subjects. Ambiguity about the scope of the first analysis made the notes and score sheets discoverable.

Work Product Objections

The plaintiffs also resisted production of the notes and score sheets on the ground that they are not discoverable “facts and data.” The plaintiffs argued that the notes and score sheets contain Nagy’s interpretation of and commentary about the slides he reviewed and are thus work product that is protected from discovery. 

The court declined to make an expert’s thought processes “categorically undiscoverable.” Nagy testified that the notes represented his visual observations of the slides. Since “visual observation” was the testing process that Nagy employed, the court did not see how else Nagy could capture the presence or absence of the characteristics he observed. The notes and score sheets were therefore discoverable “facts and data.” 

To the extent that the attorney work-product privilege protects communications between an expert and an attorney, it does not extend to an expert’s own development of the opinions he expressed. The attorney work-product privilege protects an attorney’s mental impressions, not those of an expert. Since no attorney’s mental impressions or legal theories were reflected in the score sheets or notes, they were not entitled to work-product protection.

Dual Hat Expert

Perhaps the most meaningful challenge to the production of Nagy’s notes and score sheets was rooted in the notion that Nagy was a consulting expert when he created those materials and only became a testifying expert when it became necessary to rebut Badylak’s report. A non-testifying, consulting expert is generally immune from discovery. Redesignation as a testifying expert does not cause a loss of immunity as to materials that were considered “uniquely” in the expert’s role as a consultant.

The court concluded that the plaintiffs failed to carry their burden of demonstrating that Nagy prepared his original notes and score sheets “uniquely” in his role as a consulting expert. The court did not believe that Nagy could “draw a line in the sand” between information he considered in a consulting context and information he considered when he formed his rebuttal opinions. Given the similarity of the subject matter that Nagy was asked to draw conclusions about as a consultant and later as a testifying witness, and given that his conclusions in each instance were based on a review of the same slides, the court thought it likely that Nagy’s score sheets and notes were not “unique” to the work he performed as a consulting expert. 


Civil Commitments in North Carolina May Require Expert Witnesses to Act as Lawyers

State laws typically allow the civil commitment of an individual who is dangerous to himself or others because of a mental disease. The procedures that must be followed to secure a commitment order vary from state to state.

When the allegedly dangerous person is brought into the mental health system by a police officer, states typically require a government attorney to represent the interest of the state in seeking a commitment. The person who might be committed has the right to a lawyer and may be entitled to a public defender.

The procedure in North Carolina is unusual. While North Carolina provides a lawyer to the subject of the commitment proceeding, it only requires the state to be represented by counsel when the proceeding is held at a state facility, such as a state-owned mental health hospital. 

North Carolina law gives the attorney general discretion to assign or not to assign a lawyer to commitment proceedings held in private facilities. That quirk in the law effectively forces expert witnesses to make legal decisions about what testimony they should give rather than responding to questions asked by a lawyer.

A recent appellate decision asked whether the subject of a commitment receives a fair hearing when the expert witness rather than a lawyer is the person who, as a practical matter, represents the state. The North Carolina Court of Appeals decided that the procedure is fair.

Facts of the Case

Police officers brought Q.J. to the emergency department of the Duke University Medical Center. The officers advised hospital staff that Q.J. was “having thoughts of harming his mother,” had threatened to slit her throat in the past and was threatening suicide.

Dr. Naveen Sharma signed a Petition for Involuntary Commitment of Q.J. The petition represented that Duke University Medical Center was familiar with Q.J., that he had a history of schizoaffective disorder, that he wasn’t taking his medications, and that he had been hospitalized many times in the past under similar circumstances. 

Dr. Sharma expressed the opinion that Q.J. needed treatment to prevent further deterioration of his mental condition. Dr. Sharma believed that he would become dangerous without treatment. Dr. Sharma’s petition alleged that Q.J. was unable to care for himself adequately in the community and required inpatient hospitalization for stability and safety.

Based on the petition, a magistrate found that Q.J. was mentally ill and a danger to himself or others. The magistrate authorized a temporary inpatient commitment pending a full hearing. That hearing was held about three weeks later.

Q.J.’s Commitment Hearing

Q.J. was represented by counsel at the commitment hearing. No representative appeared on behalf of the state. Q.J.’s counsel objected to the failure of either the District Attorney’s office or the Attorney General’s office to represent the state. In their absence, a mental health expert from Duke Medical Center was the only individual representing the interests of the state.

Dr. Kristen Shirley testified in support of the commitment petition. She had never evaluated Q.J. Dr. Shirley is not a lawyer. The judge overruled Q.J.’s objection to proceeding with a doctor representing the state in an adversarial proceeding.

The judge began by asking Dr. Shirley to “tell me what it is you want me to know about this matter.” Dr. Shirley testified about the results of Q.J.’s two mental health evaluations following his detention. She opined that Q.J. responded well to medication but has limited insight into his mental health status and was likely to stop taking medication if he was released to the community. 

Dr. Shirley testified that Q.J. posed a high risk of decompensating if he were released. In her opinion, decompensation might cause him to become suicidal or homicidal. Dr. Shirley testified that a community treatment team recommended hospitalization to stabilize Q.J.’s condition, including treatment with a long-acting injectable medication. Dr. Shirley recommended a 30-day commitment.

Q.J.’s attorney cross-examined Dr. Shirley. In the absence of any lawyer representing the state, the judge conducted his own redirect examination. The judge found that Q.J. was mentally ill and that the mental illness made him dangerous to himself and others. The judge ordered a 30-day civil commitment.

Expert Witness as Representative of the State

Q.J. appealed. The most significant issue on appeal was whether, in an adversarial system of justice that depends on two opposing parties being represented by counsel, a civil commitment proceeding can proceed when the only advocate for the state is a testifying expert witness, not a lawyer.

When a lawyer represents the state, the lawyer asks questions and the expert witness answers them. The judge’s role is limited to ruling on objections. Under those circumstances, the judge can remain impartial.

With no lawyer representing the state, the judge left it to Dr. Shirley to decide what testimony to give. When the judge invited her to “tell me what you want me to know,” the judge put the expert in the position of making legal judgments about the evidence that the judge should hear. The expert likely had some awareness of the evidence that is required to meet the legal standard for a commitment, but she was trained to make medical judgments, not legal judgments about the relevance of particular facts.

Q.J. complained that placing the expert in the dual role of lawyer and witness raised questions about the judge’s impartiality. The court got the ball rolling by asking Dr. Shirley to tell him what she wanted him to know, arguably asking the kind of question that a lawyer for the state would have asked. 

More troubling were the questions that the judge asked on redirect in an apparent attempt to rehabilitate Dr. Shirley’s testimony. On cross-examination, Dr. Shirley admitted that Q.J. had not actually engaged in violent behavior before the police took him into custody, that he made no threats and expressed no suicidal thoughts while he was being evaluated, and that he had no history of harming others. When, on redirect, the judge asked Dr. Shirley if her testimony was that Q.J. was a danger to himself and whether he was a danger to others, the judge was asking the kind of rehabilitative questions that a lawyer for the state would be expected to ask.

The appellate court concluded that North Carolina law does not require the state to be represented by counsel when commitment hearings are held at a private hospital. The court further concluded that there is no constitutional barrier to having an expert witness present all the testimony required to support a commitment without having a lawyer elicit that testimony. 

Perhaps because it was unwilling to upset the apple cart of North Carolina’s unusual procedure, the court concluded that the judge always remained impartial. While it is true that judges are entitled to question witnesses to clarify testimony if they do so without becoming an advocate for a party, the judge’s redirect examination was exactly the kind of questioning one would expect from an advocate, not from an impartial judge.

Policy Issues

North Carolina’s commitment procedure places expert witnesses in a difficult position. It is usually improper for a witness to give narrative testimony rather than responding to specific questions. Narrative testimony makes it difficult for opposing counsel to object since no question has been posed to which an objection can be lodged. Narrative testimony also makes it more likely that a witness will stray from the facts that are relevant to the proceeding.

Appellate judges are extraordinarily reluctant to find that trial judges were anything other than impartial. While the record makes clear that the judge acted as an advocate for the state by asking questions to rehabilitate the testimony of the expert witness, the appellate court refused to equate advocacy with partiality. As long as North Carolina continues to leave the presentation of evidence at commitment proceedings to expert witnesses rather than lawyers, the fairness of commitment proceedings — and unfairness to expert witnesses — will continue to be an issue.


MedMal Cases with Unclear Causes of Death or Injury Can’t Proceed Without Sufficient Expert Testimony in Arizona

The Arizona Supreme Court has ruled that medical malpractice cases involving unclear causes of death or injury cannot proceed without sufficient expert testimony to provide guidance to jurors.

The Incident

In March 2012, Michelle Sampson took her four-year-old son, Amaré Burks, to the Surgery Center of Peoria for a scheduled tonsillectomy and adenoidectomy. Surgery Center of Peoria is an outpatient surgery clinic, and the scheduled procedure was considered routine with an extremely low complication rate.

Dr. Guido administered the general anesthesia and Dr. Libling performed the procedure.  Dr. Libling remained with Amaré for about thirty minutes after the surgery and then transferred him to the post-operative anesthesia care unit. Nurse Kuchar attended Amaré in the recovery room. After sixty-one minutes, Amaré scored eight out of eight on a vitals-release test and he was released to his mother’s care.

Sampson took Amaré home and put him to bed. She had been told that it was normal for a patient to sleep after surgery. Approximately two hours after his discharge, Sampson checked on Amaré, but he was not breathing. Emergency personnel were unable to revive him.

The Lawsuit

Sampson brought a wrongful death action against the Surgery Center, Dr. Guido, and other defendants.

Sampson identified Dr. Greenberg as her expert witness to establish cause of death, proximate cause, and standard of care.

Dr. Greenberg testified that “(1) one hour was insufficient to assess a pediatric patient for discharge and that three hours was appropriate, especially for a child with a history of sleep apnea; (2) the anesthesiologist fell below the standard of care by discharging Amaré before that time and Amaré’s death could have been prevented with longer observation in the PACU; and (3) Amaré died from being rendered unable to breathe from the after-effects of surgery and anesthesia, as his pharyngeal tissues were swollen and obstructed his upper airway, and the residual effects of anesthesia did not allow him to awaken to overcome the obstruction.” 

Dr. Greenberg also opined that the standard of care required between one and three hours of observation before release.

The Surgery Center and Dr. Guido filed motions for partial summary judgment and argued that Dr. Greenberg’s testimony did not establish that their actions had proximately caused Amaré’s death. The trial court agreed and entered final judgment against Sampson. She appealed and the court of appeals reversed, finding that a reasonable jury could determine that the standard of care for observation was three hours.

The Arizona Supreme Court Decision

The Arizona Supreme Court granted review to determine whether the court of appeals erred.

Upon review, the Arizona Supreme Court noted that Arizona law requires that in medical malpractice cases, “causation must be established by competent expert testimony, and the narrow exception is that a jury may infer such causation if malpractice is ‘readily apparent.’”

The court stressed that in this case, expert testimony establishing causation was essential. However, disagreement existed over the cause of Amaré’s death. “Whereas the autopsy report stated that Amaré died from a ‘disseminated Strep Group A’ infection, Dr. Greenberg opined he died from a ‘swollen and obstructed upper airway’ combined with his inability ‘to breathe from the after-effects of surgery and anesthesia.’” Given that even the medical experts did not agree on the cause of death, it is unrealistic to conclude, as the court of appeals did, that a jury “could properly infer that the early discharge was the probable cause of Amare’s death.”

The court determined that, in this case, the court of appeals had departed from the proper standard for proving causation by allowing the jury to determine causation based on speculation built upon inference. Accordingly, it reversed the court of appeals decision.


The FTC Is Searching for an Expert Witness

Expert witnesses are in short supply at the Federal Trade Commission after Carl Shapiro, an economist who teaches at the University of California, Berkeley, declined to participate in the FTC’s antitrust suit against Facebook.

Facebook Litigation

In December 2020, the FTC sued Facebook for engaging in anticompetitive practices. The FTC alleged that Facebook has maintained a monopoly position in the social networking industry by acquiring competitors, including Instagram and WhatsApp. The FTC also contended that Facebook imposed anticompetitive conditions on software developers.

Facebook has acquired a number of smaller companies that play varying roles in the social media industry, including Kustomer (customer service chatbots), Snaptu (smartphone apps), Beluga (messaging apps), Face.com (facial recognition), and Onavo (mobile analytics). The acquisitions have typically been folded into Facebook. Beluga’s messaging app, for example, became Facebook Messenger.

The FTC alleged that Facebook’s “course of conduct harms competition, leaves consumers with few choices for personal social networking, and deprives advertisers of the benefits of competition.” The FTC wanted Facebook to sell Instagram and WhatsApp, to stop imposing anticompetitive conditions on software developers, and to seek prior approval from the FTC before acquiring new companies.

The case has not gone well for the FTC. In late June, the judge dismissed the FTC’s complaint after finding that the FTC did not allege sufficient facts to demonstrate that Facebook was actually a monopoly. The court also dismissed a companion suit by state attorneys general because the states waited too long to file it. 

The court gave the FTC an opportunity to amend its complaint to allege more facts that would demonstrate a violation of antitrust laws. The FTC requested an extension of time to prepare and file its amended complaint, apparently in response to its loss of Shapiro’s services.

Missing Expert

The FTC may be able to draft an amended complaint without expert assistance, but the agency has traditionally relied upon economists to make a case against alleged monopolists. It hired Shapiro in 2019, paying his economic consulting firm more than $5 million.

Politico reports that the FTC paid Shapiro almost twice as much as it paid for other expert services during the past two years “at a time the agency has told Congress it is strapped for cash.” That might not be surprising. With a value of more than $1 trillion, Facebook can afford to fund an aggressive antitrust defense. The FTC will need to spend some money if it has any hope of proving its case.

Shapiro did not respond to Politico’s inquiries about the reasons for his departure. Politico notes that Shapiro has criticized new FTC Chair Lina Khan’s “aggressive approach to antitrust enforcement.” The social value of breaking up large companies is controversial, with many economists arguing that businesses should not be punished for their success. An economic system that depends on competition, after all, should recognize that the best competitors may drive less innovative competitors out of the marketplace. 

On the other hand, tactics that intentionally stifle competition, such as buying direct competitors and shutting them down to maintain market dominance, are harmful to consumers. A disagreement about where to draw the line in antitrust enforcement may narrow the field of economists who are willing to act as expert witnesses for the FTC.

Paying for Experts

The FTC confirmed Shapiro’s departure and its need for a new expert:

“The FTC does not comment on internal deliberations over any particular expert engagement,” spokesperson Lindsay Kryzak told POLITICO. “But the agency routinely reviews its expert support needs, including to ensure that the agency is making the best use of limited public funds while carrying out its law enforcement obligations.”

A 2019 audit found that the FTC pays about $750 per hour for experts. The audit recommended that the FTC use its in-house economists as experts in most cases. However, a former staffer told Politico that the FTC doesn’t have the computing power or infrastructure to take on the massive data analysis required in large antitrust cases.

Politico determined that the FTC spent $21.3 million for expert witness services during the fiscal year that ended on September 30, 2020. In the current fiscal year, the FTC has spent $25 million. The amounts paid vary between a few thousand dollars and several million, depending on the work required. The $5.7 million that the FTC paid to Shapiro’s firm is nearly twice the amount the FTC paid for expert witness services in any other case.

Outside experts told Politico that the largest expenditure of funds usually occurs later in the case, when experts are required to write reports. The FTC typically pays $1 million to $2 million for expert reports in complex cases. The Facebook litigation, however, is far from typical. One expert explained that the Facebook case involves “mind bogglingly large quantities of data.”

The new expert may be able to use the data that Shapiro already furnished to the FTC. To defend his or her opinions in court, however, the expert will need to conduct a fresh analysis of that data. That will likely require significant funding.

As a former FTC staffer explained to POLITICO, politicians and the public have complained that tech firms like Facebook are “too big,” although their reasons for making those complaints sometimes have little to do with economics. The staffer laments that the same people complain that the FTC spends too much money on expert witnesses. Noting that critics can’t have it both ways, the former staffer argued that Congress needs to give the FTC adequate resources if it wants the agency to tackle economic behemoths like Facebook.

Expert Testimony May Be Necessary to Counter ShotSpotter Evidence

The latest technology to capture the attention of law enforcement is called ShotSpotter. The manufacturer claims that hidden microphones installed in neighborhoods can tell the difference between gunshots and other loud noises. Rather than waiting for someone to report a shooting, police agencies that rely on the technology dispatch officers to the location where the shots were allegedly fired.
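
ShotSpotter’s algorithms are proprietary, so the following is only a generic sketch of how arrival times at several acoustic sensors can be combined to estimate where a loud sound originated (a technique known as multilateration); it is not the company’s method. The sensor positions, timing noise, and solver are illustrative assumptions.

```python
# Generic multilateration sketch: estimate a sound source's location from
# arrival times at several sensors. Illustration only; not ShotSpotter's
# proprietary method.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # meters per second

sensors = np.array([[0, 0], [500, 0], [0, 500], [500, 500], [250, 700]], float)
true_source = np.array([320.0, 180.0])
# Simulated arrival times (seconds, emission at t = 0), with slight noise.
arrivals = np.linalg.norm(sensors - true_source, axis=1) / SPEED_OF_SOUND
arrivals += np.random.default_rng(0).normal(0, 1e-4, len(sensors))

def residuals(params):
    x, y, t0 = params  # candidate source position and emission time
    dist = np.linalg.norm(sensors - np.array([x, y]), axis=1)
    return (t0 + dist / SPEED_OF_SOUND) - arrivals

fit = least_squares(residuals, x0=[250.0, 250.0, 0.0])
print("Estimated source location:", fit.x[:2])
```

Small timing errors, or a sound misclassified as a gunshot in the first place, feed directly into the estimated location, which is one reason the reliability questions discussed below matter.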

The technology has generated criticism. Apart from concerns about the concentration of microphones in black neighborhoods, the Electronic Frontier Foundation is worried that police agencies might use the microphones to eavesdrop on private conversations. Whether the ShotSpotter system reduces gun violence seems doubtful. 

From the standpoint of an expert witness blog, the question is whether defense attorneys should use expert witnesses to challenge ShotSpotter evidence in court. There is good reason to think that Daubert challenges should be filed, and experts employed, whenever ShotSpotter evidence is a critical component of the prosecution’s proof.

Investigations of ShotSpotter 

A recent investigation calls into question the evidentiary value of ShotSpotter reports. Last year, Michael Williams brought a shooting victim to a Chicago hospital. Williams said the victim was shot during a drive-by shooting. After the victim died, the police arrested Williams for the victim’s murder. Why Williams would bring the victim to a hospital if Williams intended to kill him is a question that raises serious doubt about Williams’ guilt.

The police built their case on video and ShotSpotter evidence. The video evidence showed only that Williams’ car had stopped in the 6300 block of South Stony Island Avenue at 11:46 p.m. on the night of the shooting. The police contended that the victim was shot at that location. No video evidence supports that contention.

The police contend that they received a “shots fired” alert from ShotSpotter at the Stony Island location. In fact, company records show that “19 ShotSpotter sensors detected a percussive sound at 11:46 p.m. and determined the location to be 5700 South Lake Shore Drive—a mile away from the site where prosecutors say Williams committed the murder.” The company’s algorithms identified the sound as an exploding firework.

Company records show that “a ShotSpotter analyst manually overrode the algorithms and ‘reclassified’ the sound as a gunshot.” Months later, a different ShotSpotter employee manually changed the alert’s coordinates to a South Stony Island Drive location near the place where Williams’ car can be seen on camera.

The evidence suggests that ShotSpotter changed its data to support the theory that Williams shot the victim. Williams’ lawyer filed a motion that challenged the ShotSpotter evidence, arguing that it failed to meet the Illinois standard for the admissibility of expert opinions. Rather than defending against the motion, prosecutors agreed not to use ShotSpotter evidence against Williams. 

Daubert Challenges to ShotSpotter Evidence

The investigation suggests that the Chicago incident was not an isolated example of ShotSpotter tailoring its conclusions to match law enforcement theories. In a carefully worded statement, ShotSpotter denied that it has ever “altered the information in a court-admissible detailed forensic report based on fitting a police narrative.” The statement claims that ShotSpotter is “100% accurate,” a claim of certainty that many reputable forensic science experts condemn. The statement asserts that ShotSpotter evidence has been admitted over ten Frye challenges and one Daubert challenge, but it does not state how many challenges to admissibility have succeeded.

ShotSpotter commissioned a report by CSG Analysis, a “police officer-owned and operated company,” that is filled with unsurprising praise of ShotSpotter. The report does not purport to be based on a scientific analysis. Rather, it is based on interviews with police officers in cities that have paid to install ShotSpotter. 

Despite the report’s obvious credibility issues, the authors acknowledge that false positives — sounds that could be caused by trucks, dumpsters, construction, church bells, and all the other sources of concussive sound — are a significant operational problem with ShotSpotter. In two of the seven jurisdictions where interviews were conducted, half of the ShotSpotter alerts were believed to be false positives. A recent study found that Chicago police officers investigated 40,000 ShotSpotter alerts in a 21-month period that resulted in no evidence that shots had been fired.

Challenges to ShotSpotter Evidence

ShotSpotter claims that its analysts can identify actual gunshots when evidence is needed for court. The analysts allegedly perform a deeper dive into the data than the system’s algorithms perform. ShotSpotter contends that a more reliable human analysis explains why results are changed after police agencies contact ShotSpotter. Since one purpose of algorithms is to eliminate human bias, one might wonder whether second-guessing algorithms calls either the algorithms or the analyst’s opinions into question.

The admissibility of ShotSpotter evidence, whether generated by algorithms or humans, is not a foregone conclusion. A Daubert challenge should focus on whether ShotSpotter results have been accepted by any independent scientific community, whether its analytical system has been peer reviewed, whether it has a known error rate, and whether conclusions drawn by analysts have been verified by independent testing. The National Juvenile Defender Center has compiled materials, including transcripts of testimony that ShotSpotter witnesses have given at Daubert/Frye hearings, that may guide those challenges.

Notably, a California appellate court reversed a conviction based on ShotSpotter evidence because the trial court did not hold a pretrial hearing to determine whether the evidence was reliable. The court noted the dearth of appellate opinions considering the admissibility of ShotSpotter evidence and concluded that courts could not assume the reliability of the novel technology.

ShotSpotter offers to supply prosecutors with expert witnesses who will testify in court for $350 an hour with a two-hour minimum. Retaining a defense expert with a background in acoustic science may be critical to countering those experts and to bringing a successful Daubert challenge. 

Expert Testimony Excluded in Michael Mann Defamation Lawsuit

More than two decades ago, Michael Mann and two other climate scientists published an article in Nature. The article included a graph showing that the Earth’s temperature had been stable for 500 years but had spiked upward in the twentieth century. A year later, they extended the graph to cover an entire millennium, supporting their argument that the upward spike in temperature was unprecedented within the last thousand years.

Because the spike at the end of the graph resembles a hockey stick, it came to be known as the “hockey stick graph.” As Mann explained in a 2018 article in Scientific American, the hockey stick graph received widespread attention, both from neutral media sources and from the fossil fuel industry. According to Mann, “industry-funded attack dogs” began a campaign to discredit him personally in order to “discredit the iconic symbol of the human impact on our climate.”

Criticism of Mann’s research was part of a broader assertion that climate scientists had fabricated evidence to demonstrate that human activity is responsible for global warming. Hacked emails were said to support a conspiracy that came to be known as Climategate. Neutral investigations determined that the conspiracy claims were unfounded. 

Mann in particular was accused of manipulating or altering research findings, suppressing data, deleting emails to conceal wrongdoing, misusing confidential information, and engaging in other forms of scientific misconduct. An investigative panel at Pennsylvania State University, Mann’s employer, exonerated Mann of all the ethical charges that were lodged against him. The panel found that Mann did not seriously deviate from accepted practices within the academic community.

Mann’s Defamation Claim

In 2012, the Competitive Enterprise Institute (CEI) published a blog post criticizing Penn State’s exoneration of Mann. The CEI has strong ties to the petrochemical, pharmaceutical, and tobacco industries.

As Jonathan Adler explains, “the post’s author, Rand Simberg, suggested Penn State was no more diligent investigating Mann than it had been investigating Jerry Sandusky. Mark Steyn quoted and elaborated on the CEI post with a post of his own on National Review Online.”

Mann sued a number of individuals and entities for libel and intentional infliction of emotional distress. Most of the defendants, including the CEI and National Review, have been dismissed from the suit, largely because Mann could not meet the high standard of proving that the allegedly defamatory statements about him were not just false but malicious. The two blog authors are the remaining defendants.

As the case finally approached trial, the district court considered competing motions to exclude the testimony of expert witnesses relied upon by both sides. In a lengthy order, the court excluded all but one expert witness.

Court Decision

The district court emphasized that the lawsuit did not place climate science on trial. The only question raised by the lawsuit was whether the blog authors defamed Mann when they accused him of “molest[ing] and tortur[ing] data in the service of politicized science[,]” “engaging in data manipulation[,]” and creating the “fraudulent climate-change ‘hockey-stick’ graph[.]” 

Still, the blog authors contended that their statements were true. The defamation claim therefore turns in part upon the validity of Mann’s research. In that sense, the lawsuit does put climate science on trial, at least to the extent that Mann contributed to climate science with the hockey stick graph.

The bloggers intended to rely on the testimony of two experts in the fields of climate science and statistics “to lend to the credence and the legitimacy of the allegedly defamatory statements.” Mann intended to call seven experts to establish that the authors’ claims were false and defamatory.

The court determined that, apart from the statistician’s opinions, none of the expert testimony was admissible. Remarkably, the court decided that none of the experts (apart from the statistician) based their opinions on a reliable methodology. In the court’s view, the experts instead based their opinions on “documents and articles that they have reviewed.”

Relying on documents and articles is hardly an unusual practice for experts. According to the court, however, an expert only follows a reliable methodology when the expert “systematically gathers, organizes and catalogs the documents such that another expert with similar training could follow the same procedure and arrive at the same result.” Regurgitating the opinions of others without analyzing and synthesizing the opinions, the court said, is not a scientific methodology. 

According to the court, the experts merely summarized selected research studies and offered opinions about their conclusions. Yet some of the experts would have explained science to the jury. When experts are merely educating a jury rather than expressing opinions, the search for a “methodology” is misplaced.

Mann’s History of Science Expert

Take, for example, the proffered testimony of Naomi Oreskes, a professor of the History of Science at Harvard. She proposed to testify about the factors that make scientific opinions reliable. When asked what scientific methodology supported her opinions, Dr. Oreskes testified “reading and thinking.” The court ruled that reading and thinking is not a scientific methodology because jurors are just as capable of reading and thinking as are experts. 

Yet jurors are not as well equipped as an expert in the history of science to understand how scientific research is conducted. That testimony might be characterized as fact testimony, informed by an expert’s understanding of the facts, rather than opinion testimony that must be supported by a methodology.

To the extent that Professor Oreskes would have applied the scientific method to the development of the hockey stick graph in order to opine that Mann followed reliable practices in developing the graph, it is difficult to understand why her testimony was excluded. The professor’s methodology was to apply the scientific method to Mann’s work and to express an opinion that his work was consistent with accepted research principles. The application of standards to a set of facts certainly seems like a reliable methodology.

The court complained that the reliability of the professor’s methodology could not be established because it was not peer reviewed, had no established success rate, and could not be replicated by other experts in the field. Yet the professor was not conducting an experiment. Daubert emphasized that the test of reliability is flexible. Not every expert methodology depends on error rates or peer-reviewed research. The court’s narrow understanding of the Daubert decision shaped its decision to exclude most of the testimony that the parties proffered.

Professor Oreskes also proposed to testify about the agenda-driven misuse of science by “think tanks” such as CEI that “ignore, misrepresent, or reject principled scientific thought on environmental and climate issues.” The court excluded her opinions about CEI’s history of distorting science to serve the ends of industry because she recounted CEI’s doubtful publications but “made no effort to compile or catalogue CEI’s publications according to an objectively defined set of metrics.” Perhaps Professor Oreskes can testify as a fact witness regarding CEI’s history and allow the jury to draw its own conclusion about CEI’s bias.

Peter Frumhoff

Peter Frumhoff, the Director of Science and Policy and Chief Climate Scientist at the Union of Concerned Scientists, would have testified that the stolen emails revealed reasonable scientific techniques, not wrongdoing. The court again found that Frumhoff used no scientific methodology to reach that opinion. Testifying about the existence of a scientific technique and then analyzing an email to determine whether it describes a legitimate scientific technique sounds very much like a reliable methodology. Again, not all methodologies involve experiments and error rates.

Frumhoff would have testified that public attacks on scientists “can have detrimental effects on scientists, on the scientific enterprise, and on the public understanding of information and its societal implications.” That seems like fact testimony given by a knowledgeable observer about how attempts to disparage science diminish respect for scientists. It is doubtful that any methodology beyond observation is necessary to testify about the historical impact of unfounded criticism on science and scientists.

Frumhoff also proposed to testify about the ways in which climate change denial harmed Mann and climate science in general. The court concluded that the evidence was not relevant because the issue was whether Mann was defamed, not whether climate change is caused by human activity. The court decided that CEI’s alleged attempt to stifle climate science to support the agenda of the fossil fuel industry did not cause a harm that was unique to Mann and was therefore not relevant. To the extent that Frumhoff would have explained how reputational damage impairs the ability to obtain grants, the court decided that jurors understand how grant funders take an applicant’s reputation into account when they consider whether to fund a grant. Since most jurors have no idea how grant funders operate, the court’s analysis fails to appreciate how Frumhoff’s specialized knowledge would have assisted the jury.

Mann’s Other Experts

Other experts would have provided background information about the history of climate change. That evidence would have helped the jury decide that the hockey stick graph was not fraudulent. The court decided that the testimony was not relevant because it did not address the specific accusations of fraud made in the blogs.

The court excluded a computer scientist who was an expert in disinformation “because the Court has not previously been made aware of a field of study dedicated solely to tracking misinformation.” Judges are experts in the law, although they often fancy themselves to be experts in everything. The court’s lack of awareness that experts track disinformation illustrates why judges should defer to experts. 

Gerald North chaired a National Research Council committee that investigated Mann’s work. He was prepared to testify that Mann’s research was valid, honest, and conducted in a scientifically appropriate manner. The court rejected that opinion, despite its obvious reliance on an expert evaluation of Mann’s research, because the court viewed North as reciting the committee’s opinion. Since committees can’t testify, one might think that the committee chair would be well positioned to describe how the committee arrived at its conclusions. The court again discussed error rates and peer review as if all expert knowledge depends on peer-reviewed research with known error rates.

Raymond Bradley would have testified as both a fact and expert witness about the research that supported the hockey stick graph. He would have refuted the bloggers’ claims about the invalidity of that research. The court decided that the “principles and methodologies he used” to conclude that Mann selected appropriate data points for his graph were insufficiently explained. That seems like an attack on the expert’s credibility, not on the reliability of the methodology that supports the expert’s opinions.

Defense Experts

Judith Curry has a doctorate in atmospheric science. She proposed to testify that it was reasonable to refer to the hockey stick graph as fraudulent in the sense that it was deceptive and misleading. She based her opinion on Climategate emails and public accusations of fraud. Of course, the fact that others may have defamed Mann does not prove that the bloggers did not.

Curry did not claim that Mann’s conclusions were fraudulent. She opined that it would be reasonable for members of the public to believe that they were fraudulent. The court determined that her opinion was inadmissible because it invaded the province of the jury. Whether the bloggers defamed Mann depended on whether they acted with actual malice. An expert in atmospheric science has no scientific basis for determining the bloggers’ state of mind. To the extent that she offered a lay or expert opinion about whether the public could reasonably perceive the hockey stick graph as fraudulent, she was no more qualified to do so than reasonable jurors.

Curry also proposed to testify about the deficiencies in Mann’s construction of the hockey stick graph without specifically disagreeing with his conclusion about the impact of human activity on climate change. The court concluded that Curry employed no scientific methodology to form that opinion. Rather, she “reviewed several articles and documents, and then opined that the conclusions of those documents are correct. Such methodologies are not derived from the scientific method and, thus, render Dr. Curry’s opinion unreliable as expert testimony.”

The court allowed a statistician, Abraham Wyner, to testify for the defense. Wyner plans to testify that there are “aspects of Dr. Mann’s work that can reasonably be construed as manipulative, if not in intent than in effect, as the word is used in common parlance.” The court concluded that Wyner “offers detailed analysis of the statistical methods used to construct the Hockey Stick graph, and why the methods may be unreliable and, thus, tending to support a basis for Defendants’ statements.” The court was apparently satisfied that a discussion of statistics by a statistician constitutes a reliable methodology while the discussion of climate science by climate scientists did not.

Lessons Learned

The difference between an expert’s explanation of facts — an explanation based on knowledge that a lay juror is unlikely to possess — and an expert’s opinion testimony can create a gray area that lawyers need to anticipate. Lawyers also need to anticipate attacks upon methodologies that do not rely on error rates and peer-reviewed research. Finally, lawyers need to make sure that experts explain their methodologies in sufficient detail to allow judges to understand that their opinions are sufficiently reliable for a jury to consider. Unfortunately, when judges are determined to exclude evidence regardless of its merit, there is little that lawyers can do apart from hoping that an appellate court might eventually take a different view.

Georgia Court Reinstates Malpractice Verdict Despite Expert’s Equivocal Testimony About a Nationwide Standard of Care

Connie Lockhart was treated in a hospital emergency room in Cherokee County, Georgia. An emergency room physician, Dr. Glenn Bloom, mistakenly placed a catheter in her femoral artery rather than a femoral vein. The accumulation of medications administered through the catheter destroyed the tissue in her leg, resulting in its amputation.

Lockhart sued Dr. Bloom for medical negligence. Lockhart relied on the expert testimony of Dr. Eric Gluck to establish a breach of the standard of care for inserting a femoral catheter. Dr. Gluck had been board certified in critical care medicine for 27 years and had extensive experience placing central venous catheters in the femoral region.

Dr. Gluck is not an emergency room physician. He testified that he runs the ICU at the Chicago hospital where he is employed. He also testified that he is chair of a critical care committee that sets hospital policy for critical care departments, including the emergency department. He had personal, recent experience placing femoral catheters in an ICU but not in an emergency room.

Dr. Gluck testified that emergency room physicians, critical care physicians, and general practice physicians all follow the same standard of care for placing a femoral catheter. The standard of care requires the physician to identify the correct vein in which to insert the catheter and to follow an accepted procedure for its insertion. After it is inserted, the standard of care requires the physician to use one of four methods to confirm that it was inserted in the correct location.

Dr. Bloom acknowledged that he mistakenly inserted the catheter into the femoral artery but contended that he had no need to confirm its placement in the correct location because he did not suspect that it was placed incorrectly. The standard presumably requires doctors to double-check their work precisely because they might not suspect that they erred. Dr. Gluck expressed the opinion that Dr. Bloom breached the standard of care by failing to confirm the catheter’s placement in its intended location.

On cross-examination, Dr. Gluck admitted that he did not know whether emergency physicians are taught to verify the placement of a catheter. He knew that physicians in the Chicago hospital where he worked were required to verify the catheter’s placement, but he did not know whether that was a nationwide standard. On redirect, Dr. Gluck testified that he was confident that confirming the placement of catheters is a standard that applies regardless of geographical location. 

Directed Verdict

After the presentation of evidence concluded, Dr. Bloom moved for a directed verdict on the ground that Dr. Gluck was not qualified to articulate the standard of care that applies to Georgia emergency room physicians who place femoral catheters. The trial court concluded that Dr. Gluck’s testimony was “equivocal” as to whether a nationwide standard of care existed. The court concluded that Dr. Gluck was not able to testify about the specific standard of care that applies to emergency room physicians in Georgia, a state in which he never practiced medicine.

The court ruled that Dr. Gluck’s testimony did not establish a standard of care and that Lockhart therefore failed to prove a breach of that standard. The court thus directed a verdict in favor of Dr. Bloom despite the obvious harm he caused to his patient.

Nationwide Standard of Care

Courts define a standard of care as the care, skill, and treatment that, under the circumstances, is recognized as appropriate by reasonably prudent healthcare providers who practice in the same or a similar field of medicine. During much of the nation’s legal history, plaintiffs in medical malpractice cases were required to prove the standard of care that applied in the community where the treatment was provided. Courts were concerned that doctors in rural communities should not be held to the same standards as big-city doctors because rural doctors had no opportunity to learn of “modern” practice trends emerging in distant urban centers.

The “locality rule” began to change as communication and transportation barriers disappeared. Many courts allowed expert testimony about the standard of care in similar communities within a region when physicians had the opportunity to gain experience and keep abreast of medical developments by visiting those communities.

By the late twentieth century, it was clear to most courts that doctors everywhere have the same opportunity, and thus the same responsibility, to educate themselves about best medical practices. Medical journals are available nationwide. Travel to continuing education programs in larger communities was no longer burdensome. With the advent of the internet, webinars bring continuing education programs to physicians in remote parts of the country.

Most courts now agree that it isn’t unfair to expect all physicians within a specialty to be familiar with standards that are widely regarded as necessary to protect patients from harm. For the most part, courts accept that nationwide standards of care, rather than local standards that vary from community to community, are necessary to assure that patients receive care that is consistent with medical advances known to average physicians in the United States who practice in a particular specialty.

Some jurisdictions cling to the “locality” or “similar community” rule. Those jurisdictions seem more interested in protecting physicians from liability for their failure to learn about current standards of care than in protecting patients from negligent care.

Appellate Analysis

The Georgia Supreme Court agreed that Dr. Gluck’s testimony was inconsistent. During his direct and redirect testimony, Dr. Gluck described a nationwide standard of care that applies to all physicians who place femoral catheters. On cross-examination, he admitted that he does not know if that standard of care is taught in all emergency room residency programs. He explained that his knowledge was based on his own experience in establishing a hospital-wide standard of care in Chicago.

Proving the existence of a nationwide standard doesn’t necessarily require proof that every medical school in the nation teaches that standard. No single expert is familiar with the teaching practices in every medical school. By virtue of their own experience attending conferences and meeting with other physicians who practice in a specialty, experts are often capable of forming an opinion that a nationwide standard of care has emerged.

Dr. Gluck’s testimony nevertheless created some uncertainty as to whether he was describing a national standard or a standard that applied only in the Chicago hospital where he worked. However, the supreme court noted that Dr. Gluck’s testimony was admitted into evidence without objection. Dr. Bloom did not make a Daubert challenge to Dr. Gluck’s qualifications or to the facts and methodology that informed his opinion. Instead, Dr. Bloom lay in the weeds and first raised his objection in a motion for a directed verdict after Dr. Gluck’s opinion was already in evidence.

In Georgia, a directed verdict should be granted only if there is no evidence that would support a verdict in the party’s favor. Dr. Gluck testified that a nationwide standard of care existed. If he contradicted that testimony, it was up to the jury to decide which of Dr. Gluck’s contradictory statements to believe. When the trial court decided that Dr. Gluck’s testimony was “equivocal,” the court was commenting upon the expert’s credibility, not upon the existence of evidence that would support the verdict. 

The credibility of an expert witness is always for the jury to decide. The trial court erred by disregarding expert testimony that supported the verdict. The supreme court accordingly reversed the judgment.

Lessons Learned

Lawyers can learn two lessons from the Lockhart decision. First, when an expert witness needs to establish a standard of care, lawyers should prepare the witness to explain why the expert believes that the standard has been adopted nationwide. Reference to medical textbooks, journal articles, or seminars with a nationwide audience may support the belief that a nationwide standard of care exists.

Second, lawyers who want to challenge an expert’s opinion should not wait until the evidence is closed to bring that challenge. Making a Daubert motion before trial or objecting during the trial may result in exclusion of the evidence. While lawyers, as a tactical matter, might not want to alert opposing counsel to deficient testimony while counsel may still be able to correct it, lying in the weeds may allow a jury to base a verdict on testimony that could have been excluded if a timely objection had been made.