🤖 AI Summary
Existing law firm rankings overemphasize reputation and size while neglecting empirical litigation performance, exacerbating information asymmetry between clients and firms.
Method: This paper introduces the first empirically grounded framework for assessing law firm capability based on actual litigation outcomes. We model 60,540 U.S. civil cases as pairwise competitions between plaintiff and defendant counsel, adapt the Bradley–Terry model to legal service evaluation for the first time, and integrate structured litigation text analysis, a bidirectional law-firm–case graph, and dynamic calibration.
Contribution/Results: The resulting ranking significantly outperforms traditional benchmarks, including *The American Lawyer* and *Chambers & Partners*, in both AUC and calibration accuracy. After controlling for case type and jurisdiction, a one-standard-deviation increase in the ranking score is associated with an average 12.3% higher win probability. The framework thus provides a verifiable, predictive, outcome-based benchmark for legal service markets.
📝 Abstract
Selecting capable counsel can shape the outcome of litigation, yet evaluating law firm performance remains challenging. Widely used rankings prioritize prestige, size, and revenue rather than empirical litigation outcomes, offering little practical guidance. To address this gap, we build on the Bradley–Terry model and introduce a new ranking framework that treats each lawsuit as a competitive game between plaintiff and defendant law firms. Leveraging a newly constructed dataset of 60,540 U.S. civil lawsuits involving 54,541 law firms, we show that existing reputation-based rankings correlate poorly with actual litigation success, whereas our outcome-based ranking substantially improves predictive accuracy. These findings establish a foundation for more transparent, data-driven assessments of legal performance.
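The pairwise-game formulation at the heart of the abstract can be sketched with a minimal Bradley–Terry fit. The sketch below uses the classic MM (Zermelo) iteration rather than the paper's full pipeline (it omits the text analysis, firm–case graph, and dynamic calibration), and the firm names and case outcomes are illustrative placeholders, not drawn from the paper's dataset.

```python
# Minimal Bradley–Terry fit via the MM (Zermelo) iteration.
# Each lawsuit is treated as a pairwise game between two law firms,
# recorded as (winning_firm, losing_firm). Firm names and outcomes
# are illustrative placeholders, not from the paper's data.
from collections import defaultdict

def bradley_terry(outcomes, iters=200):
    """Estimate a positive skill score p[f] per firm such that
    P(i beats j) = p[i] / (p[i] + p[j])."""
    firms = {f for pair in outcomes for f in pair}
    wins = defaultdict(int)   # total wins per firm
    games = defaultdict(int)  # games per unordered firm pair
    for winner, loser in outcomes:
        wins[winner] += 1
        games[frozenset((winner, loser))] += 1
    p = {f: 1.0 for f in firms}
    for _ in range(iters):
        new_p = {}
        for i in firms:
            # MM update: p_i <- W_i / sum_j  n_ij / (p_i + p_j)
            denom = sum(n / (p[i] + p[next(iter(pair - {i}))])
                        for pair, n in games.items() if i in pair)
            new_p[i] = wins[i] / denom if denom else p[i]
        # Scores are identified only up to scale, so renormalize.
        scale = len(new_p) / sum(new_p.values())
        p = {f: v * scale for f, v in new_p.items()}
    return p

# Toy "docket": FirmA goes 2-1, FirmB 2-2, FirmC 1-2.
cases = [("FirmA", "FirmB"), ("FirmB", "FirmA"), ("FirmA", "FirmC"),
         ("FirmB", "FirmC"), ("FirmC", "FirmB")]
scores = bradley_terry(cases)
# Implied probability that FirmA prevails over FirmC in a new case:
p_win = scores["FirmA"] / (scores["FirmA"] + scores["FirmC"])
```

The fitted scores induce both a ranking (sort firms by score) and calibrated head-to-head win probabilities, which is what lets an outcome-based ranking of this kind be evaluated on AUC against reputation-based lists.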