AI Summary
Existing legal large language model (LLM) evaluation benchmarks are overly simplified and fail to assess the nuanced reasoning and ambiguity-handling capabilities required in real-world legal practice. To address this gap, this work proposes PLawBench, a benchmark grounded in authentic legal workflows that encompasses three core tasks: public legal consultation, case analysis, and legal document generation. It introduces, for the first time, a rubric-based fine-grained evaluation framework covering 13 practical scenarios and approximately 12,500 expert-designed scoring criteria. By integrating structured tasks, expert-annotated standards, and an LLM-based automatic evaluator aligned with human judgment, the benchmark enables evaluations of ten state-of-the-art models that reveal significant deficiencies in fine-grained legal reasoning, demonstrating both the effectiveness and the necessity of the proposed benchmark.
Abstract
As large language models (LLMs) are increasingly applied to legal domain-specific tasks, evaluating their ability to perform legal work in real-world settings has become essential. However, existing legal benchmarks rely on simplified and highly standardized tasks, failing to capture the ambiguity, complexity, and reasoning demands of real legal practice. Moreover, prior evaluations often adopt coarse, single-dimensional metrics and do not explicitly assess fine-grained legal reasoning. To address these limitations, we introduce PLawBench, a Practical Law Benchmark designed to evaluate LLMs in realistic legal practice scenarios. Grounded in real-world legal workflows, PLawBench models the core processes of legal practitioners through three task categories: public legal consultation, practical case analysis, and legal document generation. These tasks assess a model's ability to identify legal issues and key facts, perform structured legal reasoning, and generate legally coherent documents. PLawBench comprises 850 questions across 13 practical legal scenarios, with each question accompanied by expert-designed evaluation rubrics, yielding approximately 12,500 rubric items for fine-grained assessment. Using an LLM-based evaluator aligned with human expert judgments, we evaluate 10 state-of-the-art LLMs. Experimental results show that none achieves strong performance on PLawBench, revealing substantial limitations in the fine-grained legal reasoning capabilities of current LLMs and highlighting important directions for the future evaluation and development of legal LLMs. Data is available at: https://github.com/skylenage/PLawbench.