🤖 AI Summary
This study addresses the lack of quality evaluation metrics for automated judicial verdict generation in the Chinese legal context by introducing the first benchmark designed specifically for the Chinese judicial system. Methodologically, it constructs a dataset integrating three sources (authentic case records, statutory provisions, and historical verdicts) and develops a multidimensional automated evaluation framework co-designed with legal domain experts. Baseline approaches evaluated against this framework include retrieval-augmented generation (RAG) over multi-source legal corpora, domain-specific fine-tuning, and few-shot in-context learning. Key contributions include: (1) establishing the first evaluation standard for generative AI in the judicial domain; (2) open-sourcing a high-quality dataset and a fully reproducible codebase; and (3) demonstrating that RAG substantially improves factual accuracy and legal grounding, while complex legal reasoning and strict adherence to formal verdict structure and formatting conventions remain persistent challenges.
📝 Abstract
This paper introduces JuDGE (Judgment Document Generation Evaluation), a novel benchmark for evaluating the performance of judgment document generation in the Chinese legal system. We define the task as generating a complete legal judgment document from the given factual description of the case. To facilitate this benchmark, we construct a comprehensive dataset consisting of factual descriptions from real legal cases, paired with their corresponding full judgment documents, which serve as the ground truth for evaluating the quality of generated documents. This dataset is further augmented by two external legal corpora that provide additional legal knowledge for the task: one comprising statutes and regulations, and the other consisting of a large collection of past judgment documents. In collaboration with legal professionals, we establish a comprehensive automated evaluation framework to assess the quality of generated judgment documents across various dimensions. We evaluate various baseline approaches, including few-shot in-context learning, fine-tuning, and a multi-source retrieval-augmented generation (RAG) approach, using both general and legal-domain LLMs. The experimental results demonstrate that, while RAG approaches can effectively improve performance on this task, there is still substantial room for further improvement. All code and datasets are available at: https://github.com/oneal2000/JuDGE.
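To make the multi-source RAG setup concrete, the sketch below shows one way the two external corpora could feed a generation prompt. This is a minimal illustration, not the paper's implementation: the corpus contents, the keyword-overlap retriever, and all function names are hypothetical placeholders standing in for a real retriever and LLM call.

```python
# Minimal sketch of multi-source RAG for judgment generation.
# Assumptions: toy corpora and a naive keyword-overlap retriever;
# the actual JuDGE baselines use real retrieval models and LLMs.

def score(query_terms, doc):
    """Count how many query terms appear in the document (toy relevance score)."""
    return sum(term in doc for term in query_terms)

def retrieve(query, corpus, k=1):
    """Return the top-k documents from one corpus by overlap score."""
    terms = query.split()
    return sorted(corpus, key=lambda d: score(terms, d), reverse=True)[:k]

def build_prompt(fact_description, statute_corpus, judgment_corpus):
    """Assemble a generation prompt from the case facts plus context
    retrieved separately from each external corpus."""
    statutes = retrieve(fact_description, statute_corpus)
    precedents = retrieve(fact_description, judgment_corpus)
    return (
        "Relevant statutes:\n" + "\n".join(statutes) + "\n\n"
        "Similar past judgments:\n" + "\n".join(precedents) + "\n\n"
        "Case facts:\n" + fact_description + "\n\n"
        "Draft a complete judgment document."
    )

# Toy corpora (hypothetical content).
statutes = ["Article 264: theft of property ...",
            "Article 133: traffic offenses ..."]
judgments = ["Judgment: defendant committed theft of property ...",
             "Judgment: defendant caused a traffic accident ..."]

prompt = build_prompt("defendant charged with theft of property",
                      statutes, judgments)
print(prompt)
```

The prompt would then be passed to the LLM under evaluation; the few-shot and fine-tuning baselines differ only in how (or whether) this retrieved context is supplied.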