🤖 AI Summary
To address the low accuracy, poor stability, and high cost of commercial large language models (LLMs) for legal text annotation, this paper proposes replacing generic prompt engineering with lightweight supervised fine-tuning (SFT) of open-source small language models (e.g., the Llama and Phi series). The contributions are threefold: (1) CaselawQA, a large-scale legal annotation benchmark comprising 260 fine-grained tasks, nearly all new to the machine learning community, which systematically exposes the performance limits of state-of-the-art closed-source models (e.g., GPT-4.5, Claude 3.7 Sonnet); (2) with only a few hundred to a thousand labeled examples, SFT enables small models to outperform commercial LLMs on most tasks; (3) empirical validation of a specialized, cost-efficient, and reproducible pipeline for legal NLP, offering a methodological and empirical foundation for domain-adapted language model research.
📝 Abstract
Annotation and classification of legal text are central components of empirical legal research. Traditionally, these tasks are delegated to trained research assistants. Motivated by advances in language modeling, empirical legal scholars are increasingly turning to prompting commercial models, hoping to alleviate the significant cost of human annotation. Despite growing use, our understanding of how best to utilize large language models for legal annotation remains limited. To bridge this gap, we introduce CaselawQA, a benchmark comprising 260 legal annotation tasks, nearly all new to the machine learning community. We demonstrate that commercial models, such as GPT-4.5 and Claude 3.7 Sonnet, achieve non-trivial yet highly variable accuracy, generally falling short of the performance required for legal work. We then show that small, lightly fine-tuned models outperform commercial models; a few hundred to a thousand labeled examples are usually enough to achieve higher accuracy. Our work points to a viable alternative to the predominant practice of prompting commercial models: for concrete legal annotation tasks with some available labeled data, researchers are likely better off using a fine-tuned open-source model.