Data-Driven Case Valuation

May 11, 2020
Dr. Dennis Devine
ThemeVision Focus

Parties to lawsuits and their lawyers inevitably find themselves pondering one overriding question: What is our case really worth?

The answer drives every major decision in a case. Whether to negotiate. How hard to negotiate. Whether to accept an offer or go to trial. This article discusses a data-based way to help make those decisions by combining a key research insight on jury decision making with input from mock jurors.

Challenges to Case Valuation

Seasoned trial lawyers think they know something about case valuation. And why wouldn’t they? They know their facts—the good ones and the bad ones. They know the demographic characteristics of the jury pool, the history of jury awards in the jurisdiction, and the awards produced by juries in similar cases. This knowledge and experience certainly provides some insight into what a case is worth.

But it’s difficult to accurately value a case using just rules of thumb or mental models filtered through the lens of experience. Real numbers certainly come into play (e.g., actual medical bills), but the list of intangibles is often long: pain and suffering, lost income, future medical costs, decreased enjoyment of relationships, expected life span, among others.

Jurors also differ in their backgrounds, life experiences, attitudes, and values. Diversity among jurors leads to variability in how they value cases. Reasonable minds can and will differ.

And sometimes jurors consider extralegal factors they shouldn’t. Research shows they often consider things such as whether the parties are insured, how much of any award will go to the attorneys, and what will remain after taxes.

How is a litigator supposed to account for all these things in valuing a case?

A Data-Based Method for Valuing a Case

It all starts with one very important and reliable finding: The social process of “converting” a set of juror award preferences into a final jury award is conservative in that groups usually choose awards near the middle of their members’ preferences.  In fact, research shows the best predictor of a jury’s award is the median award preferred by its members. Studies have found the median member award (prior to deliberation) to be highly correlated with the jury’s actual award—and more so than the mean member award.  So the median rule provides a good empirical basis for predicting a jury’s award.
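To see why the median matters, here is a minimal sketch using invented numbers: six hypothetical juror award preferences, one of them an outlier. The outlier pulls the mean up sharply, while the median stays near the center of the group, which is closer to how deliberating juries tend to behave.

```python
from statistics import mean, median

# Hypothetical preferred awards (in dollars) from six mock jurors;
# the last juror favors a much larger award than the others.
preferences = [500_000, 800_000, 1_000_000, 1_200_000, 1_500_000, 6_000_000]

print(f"Median: ${median(preferences):,.0f}")  # $1,100,000
print(f"Mean:   ${mean(preferences):,.0f}")    # $1,833,333
```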

And, once we have data from mock jurors, we can use the median rule to forecast the jury award in any case.

We do this by collecting data from a large pool of mock jurors and randomly choosing a mock “jury” of six people from the pool (or whatever jury size is used in the trial jurisdiction). We then estimate the award that “jury” of six people would have arrived at if they had deliberated by calculating the median award preferred by its members. This award is noted, the mock jurors are returned to the pool, and the sampling process is repeated many times.

This process creates a distribution of potential jury awards based on the preferred awards of our mock jurors. So, if we randomly draw out 1,000 different juries, we will have an award distribution consisting of 1,000 estimated jury awards. But what use is this?
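Here is a minimal sketch of how that resampling might be implemented. The pool of mock-juror awards below is simulated placeholder data standing in for real survey responses; the jury size and number of draws are the parameters described above.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Placeholder: preferred awards (in dollars) from a pool of mock jurors.
# In practice these would come from the online mock-juror survey.
pool = rng.lognormal(mean=14.3, sigma=0.6, size=400)

n_juries = 1_000   # number of simulated juries to draw
jury_size = 6      # jury size used in the trial jurisdiction

# For each simulated jury: sample jurors without replacement from the pool,
# then apply the median rule to estimate that jury's award.
jury_awards = np.array([
    np.median(rng.choice(pool, size=jury_size, replace=False))
    for _ in range(n_juries)
])
```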

Our potential award distribution provides two very important pieces of information: (1) the most likely jury award, and (2) the most likely range of potential awards. This award distribution will be bell-shaped, with most awards in the middle but a few that are extremely high or low. Most importantly, the mean of our potential award distribution is statistically the most likely award if we picked one jury at random out of the pool of mock jurors. And the closer a potential award is to the mean of our award distribution, the more likely we would be to get that award from a randomly selected jury.

And we can go even further. We can use probability theory to construct a confidence interval (often 95%) around the most likely award to get a sense of the range of jury awards we might see. So, for example, we might learn that if we were to pull out one random jury, its most likely award would be $1.7 million. And if we pulled out 100 juries at random, 95 of their awards would be expected to fall between $1.3 million and $2.1 million (i.e., a 95% confidence interval).
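Continuing the sketch above (still on placeholder data), the most likely award and the 95% range can be read directly off the simulated distribution. One simple approach is to take the mean of the simulated jury awards as the point forecast and the 2.5th and 97.5th percentiles as the bounds of the range.

```python
# Point forecast: mean of the simulated jury-award distribution.
most_likely = jury_awards.mean()

# 95% range: middle 95% of the simulated jury awards.
low, high = np.percentile(jury_awards, [2.5, 97.5])

print(f"Most likely award: ${most_likely:,.0f}")
print(f"95% of jury awards expected between ${low:,.0f} and ${high:,.0f}")
```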

But why not just estimate awards using the preferred awards of the individual mock jurors? The short answer is that their distribution is very different from the mock juries’ award distribution. The individuals’ distribution is much more skewed and much more spread out. Predicting jury awards from individual data therefore produces considerably more prediction error.
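Using the same simulated placeholder data from the sketch above, a quick comparison of the two distributions makes the point: the individual preferences are far more spread out than the jury-level medians.

```python
def summarize(label, awards):
    """Print simple spread statistics for a set of awards."""
    print(f"{label}: std ${np.std(awards):,.0f}, "
          f"range ${awards.min():,.0f} to ${awards.max():,.0f}")

summarize("Individual mock jurors", pool)         # wide and skewed
summarize("Simulated jury medians", jury_awards)  # much tighter
```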

Technology now makes it easy to collect data from a large and diverse group of mock jurors from the trial jurisdiction via the internet. We can present a concise summary of key facts and arguments, even show pictures or video clips, then ask for judgments related to liability and damages. And it can all be done for a fraction of a case’s value in a fairly short period of time.

Of course, the output from mock jurors is only as good as the inputs. The information given to the mock jurors needs to be carefully assembled, vetted, and presented. This is where seasoned trial counsel plays an important role. Counsel must advise the research team on the information likely to come into evidence, the manner in which it will be presented at trial, and the arguments both sides will make about it.

Valuing a case is difficult. But case-specific data can make it less so. Armed with a forecast of potential jury awards and their likelihood, parties and their counsel can make better decisions about settlement and trial strategy by better answering the underlying question: What’s this case really worth?


Dennis J. Devine, PhD, MJ

Dennis Devine is a Litigation Consultant with ThemeVision LLC.

He is the author of Jury Decision Making: The State of the Science (2012), a book published by New York University Press that summarizes the scientific research on juries and offers an integrative theory of how they reach decisions.

Charles P. Edwards, Partner

Charles Edwards is a partner at Barnes & Thornburg LLP, where he is co-chair of the firm’s Insurance Recovery and Counseling practice. He is a seasoned litigator with over 25 years of experience prosecuting claims and lawsuits for insurance coverage, and prosecuting and defending lawsuits involving other losses and damages.