Legal$Δ$: Enhancing Legal Reasoning in LLMs via Reinforcement Learning with Chain-of-Thought Guided Information Gain

📅 2025-08-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing legal large language models (LLMs) commonly rely on intuitive, single-step reasoning and lack verifiable, interpretable multi-step legal argumentation, which limits their deployment in high-reliability judicial scenarios. To address this, we propose a preference-free reinforcement learning framework that explicitly models and optimizes legal reasoning processes. Our method introduces a chain-of-thought-guided information gain mechanism, dual-mode input differential training, and a multidimensional reward function that jointly measures logical rigor and legal domain expertise. We further integrate knowledge distillation from DeepSeek-R1 and contrastive learning to enhance reasoning fidelity and transparency. Experimental results demonstrate significant improvements over strong baselines across multiple legal reasoning benchmarks: our approach achieves higher accuracy while generating more robust, interpretable, and jurisprudentially sound judgments, advancing both performance and accountability in legal AI systems.

📝 Abstract
Legal Artificial Intelligence (LegalAI) has achieved notable advances in automating judicial decision-making with the support of Large Language Models (LLMs). However, existing legal LLMs still struggle to generate reliable and interpretable reasoning processes. They often default to fast-thinking behavior by producing direct answers without explicit multi-step reasoning, limiting their effectiveness in complex legal scenarios that demand rigorous justification. To address this challenge, we propose Legal$Δ$, a reinforcement learning framework designed to enhance legal reasoning through chain-of-thought guided information gain. During training, Legal$Δ$ employs a dual-mode input setup, comprising a direct-answer mode and a reasoning-augmented mode, and maximizes the information gain between them. This encourages the model to acquire meaningful reasoning patterns rather than generating superficial or redundant explanations. Legal$Δ$ follows a two-stage approach: (1) distilling latent reasoning capabilities from a powerful Large Reasoning Model (LRM), DeepSeek-R1, and (2) refining reasoning quality via differential comparisons, combined with a multidimensional reward mechanism that assesses both structural coherence and legal-domain specificity. Experimental results on multiple legal reasoning tasks demonstrate that Legal$Δ$ outperforms strong baselines in both accuracy and interpretability. It consistently produces more robust and trustworthy legal judgments without relying on labeled preference data. All code and data will be released at https://github.com/NEUIR/LegalDelta.
Problem

Research questions and friction points this paper is trying to address.

Enhancing legal reasoning in LLMs for reliable judicial decisions
Addressing lack of interpretable reasoning in legal AI systems
Improving complex legal scenario handling via structured reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning with chain-of-thought guidance
Dual-mode input for maximizing information gain
Multidimensional reward for legal reasoning quality
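The dual-mode information gain and multidimensional reward described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes the information gain is computed as the score difference between the reasoning-augmented and direct-answer modes, and the scoring dimensions and weights are hypothetical placeholders.

```python
def information_gain_reward(score_with_cot: float, score_direct: float) -> float:
    """Differential reward between the two input modes.

    Positive only when chain-of-thought reasoning actually improves the
    answer over a direct (fast-thinking) response, discouraging
    superficial or redundant explanations. (Illustrative assumption.)
    """
    return max(0.0, score_with_cot - score_direct)


def multidimensional_reward(accuracy: float,
                            structure: float,
                            legal_specificity: float,
                            weights: tuple = (0.5, 0.25, 0.25)) -> float:
    """Hypothetical weighted combination of answer accuracy, structural
    coherence of the reasoning chain, and legal-domain specificity."""
    w_a, w_s, w_l = weights
    return w_a * accuracy + w_s * structure + w_l * legal_specificity


# Example: reasoning-augmented mode scores 0.9 vs 0.6 for the direct answer.
gain = information_gain_reward(0.9, 0.6)
reward = multidimensional_reward(accuracy=0.9, structure=0.8,
                                 legal_specificity=0.7)
```

In an RL training loop, a signal like `gain` would be combined with `reward` to update the policy, so the model is credited both for getting the judgment right and for reasoning that demonstrably contributed to it.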
Xin Dai
Research Scientist, Visa Research
Data Mining
Buqiang Xu
Department of Computer Science and Technology, Northeastern University, Shenyang, China
Zhenghao Liu
Northeastern University
NLP, Information Retrieval
Yukun Yan
Tsinghua University
Large Language Model
Huiyuan Xie
University of Cambridge
Natural Language Processing, AI and Law
Xiaoyuan Yi
Senior Researcher, Microsoft Research Asia
Natural Language Generation, Societal AI, Large Language Model, Responsible AI
Shuo Wang
Department of Computer Science and Technology, Institute for AI, Tsinghua University, Beijing, China
Ge Yu
Department of Computer Science and Technology, Northeastern University, Shenyang, China