The Center for Statistics and Applications in Forensic Evidence (CSAFE) funded a Duke Center for Science and Justice (CSJ) study examining the impact of forensic evidence on jurors.
The article, “Error Rates, Likelihood Ratios, and Jury Evaluation of Forensic Evidence,” by Duke CSJ Director Brandon L. Garrett, JD, Research Director William E. Crozier, Ph.D., and Rebecca Grady, Ph.D., was published in the July 2020 issue of the Journal of Forensic Sciences.
From CSAFE’s website:
“This study, written by Drs. William Crozier and Rebecca Grady, is part of a larger body of CSAFE work examining a pressing question at the intersection of law and forensic science: how do jurors understand and evaluate forensic expert reports and testimony?” said Brandon L. Garrett, study co-author and L. Neil Williams, Jr. Professor of Law at Duke University.
In the study, about 900 participants received a mock-trial scenario about a convenience store robbery in which only one piece of evidence linked the defendant to the crime. Participants were given either fingerprint evidence or novel voice comparison evidence, and the choice produced a significant difference in verdicts: jurors gave more weight to fingerprint evidence than to voice comparison evidence. The study also varied the accompanying instructions, disclosing forensic evidence error rates to some jurors, likelihood ratios for the evidence to others, and both to a third group.
“We find that jurors can and do adjust the weight they place on forensic evidence when they are informed of error rates. However, they also bring with them prior views about the reliability of evidence, and those views also matter,” Garrett said.
Mock jurors presented with error rate information voted guilty less often than participants given traditional instructions that omit error rates, but only when the information was attached to fingerprint evidence. Similarly, jurors who received likelihood ratios for fingerprint evidence placed less weight on the expert testimony than jurors who heard testimony claiming a conclusive match to the defendant. Verdicts based on voice evidence, by contrast, were largely unaffected by the inclusion of error rates and likelihood ratios.
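For readers unfamiliar with the term, the likelihood ratios presented to mock jurors follow the standard form used in forensic reporting. A brief illustrative sketch (not drawn from the study's own materials, which are described only in general terms above):

```latex
% Likelihood ratio (LR) for evidence E under two competing hypotheses:
% H_p: the defendant is the source of the trace (e.g., the fingerprint);
% H_d: someone else is the source.
\[
  \mathrm{LR} = \frac{P(E \mid H_p)}{P(E \mid H_d)}
\]
% In odds form of Bayes' rule, the LR is the factor by which the
% evidence should update a fact-finder's prior odds:
\[
  \underbrace{\frac{P(H_p \mid E)}{P(H_d \mid E)}}_{\text{posterior odds}}
  \;=\; \mathrm{LR} \times
  \underbrace{\frac{P(H_p)}{P(H_d)}}_{\text{prior odds}}
\]
```

For example, an LR of 100 means the observed evidence is 100 times more probable if the defendant is the source than if someone else is; it quantifies the strength of the evidence without, by itself, stating a probability of guilt.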
In a final element of the study, an overwhelming percentage of participants expressed concern about wrongly convicting an innocent person. When asked whether wrongfully convicting an innocent defendant or failing to convict a guilty person was worse, 46.9% considered wrongful conviction the worse outcome, while 46.2% considered the two outcomes equally bad. The remaining 6.9% of participants believed failing to convict a guilty person was the worse outcome, and these participants were also more likely to vote guilty overall.
“We suggest that we need testimony and judicial instructions that both tell jurors what they need to know to understand forensic evidence properly, but that also take into account where jurors are coming from and their preconceptions,” said Garrett.
Future research focused on developing these judicial instructions and forms of testimony will help members of the forensic science community better educate jurors about the strengths and limitations of forensic evidence. Thanks to these researchers’ work, CSAFE can begin to pinpoint an ideal method for improving forensic testimony outcomes, with a goal of fewer wrongful convictions and greater forensic and statistical literacy in the legal community.