JuDGE: Benchmarking Judgment Document Generation for Chinese Legal System

📅 2025-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of quality evaluation metrics for automated judicial verdict generation in the Chinese legal context by introducing the first benchmark specifically designed for Chinese judicial systems. Methodologically, it constructs a three-source integrated dataset—comprising authentic case records, statutory provisions, and historical verdicts—and develops a multidimensional automated evaluation framework co-designed with legal domain experts. The framework incorporates retrieval-augmented generation (RAG) with multi-source legal corpora, domain-specific fine-tuning, and few-shot in-context learning. Key contributions include: (1) establishing the first evaluation standard for generative AI in the judicial domain; (2) open-sourcing a high-quality dataset and fully reproducible codebase; and (3) demonstrating that RAG substantially improves factual accuracy and legal grounding, while revealing persistent challenges in complex legal reasoning and strict adherence to formal verdict structure and formatting conventions.

📝 Abstract
This paper introduces JuDGE (Judgment Document Generation Evaluation), a novel benchmark for evaluating the performance of judgment document generation in the Chinese legal system. We define the task as generating a complete legal judgment document from the given factual description of the case. To facilitate this benchmark, we construct a comprehensive dataset consisting of factual descriptions from real legal cases, paired with their corresponding full judgment documents, which serve as the ground truth for evaluating the quality of generated documents. This dataset is further augmented by two external legal corpora that provide additional legal knowledge for the task: one comprising statutes and regulations, and the other consisting of a large collection of past judgment documents. In collaboration with legal professionals, we establish a comprehensive automated evaluation framework to assess the quality of generated judgment documents across various dimensions. We evaluate various baseline approaches, including few-shot in-context learning, fine-tuning, and a multi-source retrieval-augmented generation (RAG) approach, using both general and legal-domain LLMs. The experimental results demonstrate that, while RAG approaches can effectively improve performance in this task, there is still substantial room for further improvement. All code and datasets are available at: https://github.com/oneal2000/JuDGE.
Problem

Research questions and friction points this paper is trying to address.

Evaluating judgment document generation in the Chinese legal system
Generating complete legal judgments from factual case descriptions
Assessing generated-document quality with an automated evaluation framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed the JuDGE benchmark for Chinese judgment documents
Integrated two external legal corpora (statutes and past judgments) for additional legal knowledge
Applied a multi-source retrieval-augmented generation (RAG) approach
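The multi-source RAG approach described above can be sketched as follows. This is an illustrative assumption of the general pattern, not the benchmark's actual implementation: the toy lexical scorer, corpus contents, and prompt format are all hypothetical, and a real system would use a proper retriever and an LLM to complete the prompt.

```python
def score(query: str, doc: str) -> int:
    """Toy lexical relevance: number of document tokens shared with the query."""
    q = set(query.split())
    return sum(1 for t in doc.split() if t in q)

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the top-k documents from one corpus by the toy score."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(facts: str, statutes: list[str], past_judgments: list[str]) -> str:
    """Assemble a generation prompt from the case facts plus evidence
    retrieved from both legal corpora (statutes and past judgments)."""
    statute_hits = retrieve(facts, statutes)
    judgment_hits = retrieve(facts, past_judgments)
    return (
        "Relevant statutes:\n" + "\n".join(statute_hits) + "\n\n"
        "Similar past judgments:\n" + "\n".join(judgment_hits) + "\n\n"
        "Case facts:\n" + facts + "\n\n"
        "Draft the full judgment document:"
    )

if __name__ == "__main__":
    statutes = ["Article 264: theft of property ...", "Article 133: traffic offense ..."]
    past = ["Judgment: defendant committed theft of property ..."]
    print(build_prompt("defendant committed theft of a bicycle", statutes, past))
```

The prompt would then be passed to a general or legal-domain LLM; the paper's evaluation framework scores the resulting document along multiple dimensions against the ground-truth judgment.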
Weihang Su
Tsinghua University
Information Retrieval, Natural Language Processing, AI for Legal
Baoqing Yue
DCST, Tsinghua University, Beijing 100084, China
Qingyao Ai
Associate Professor, Dept. of CS&T, Tsinghua University
Information Retrieval, Machine Learning
Yiran Hu
DCST, Tsinghua University, Beijing 100084, China
Jiaqi Li
DCST, Tsinghua University, Beijing 100084, China
Changyue Wang
Tsinghua University
Information Retrieval, Large Language Models, AI for Legal
Kaiyuan Zhang
DCST, Tsinghua University, Beijing 100084, China
Yueyue Wu
DCST, Tsinghua University, Beijing 100084, China
Yiqun Liu
DCST, Tsinghua University, Beijing 100084, China