🤖 AI Summary
Existing legal large language models (LLMs) commonly rely on intuitive, single-step reasoning and lack verifiable, interpretable multi-step legal argumentation, which limits their deployment in high-reliability judicial settings. To address this, we propose Legal$\Delta$, a preference-free reinforcement learning framework that explicitly models and optimizes the legal reasoning process. The method introduces a chain-of-thought-guided information-gain mechanism, dual-mode differential training over direct-answer and reasoning-augmented inputs, and a multidimensional reward function that jointly measures logical rigor and legal-domain expertise. It further distills latent reasoning capability from DeepSeek-R1 and refines it through differential comparisons to improve reasoning fidelity and transparency. Experiments on multiple legal reasoning benchmarks show significant gains over strong baselines: the approach achieves higher accuracy while generating more robust, interpretable, and jurisprudentially sound judgments, advancing both performance and accountability in legal AI systems.
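The central quantity here is the information gain that a reasoning chain contributes toward the final answer. Below is a minimal Python sketch of one way to estimate it with a HuggingFace-style causal LM, scoring the answer's log-likelihood with and without the chain of thought; the prompt templates and the `answer_log_likelihood` / `information_gain` helpers are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: dual-mode information gain for a reasoning chain.
# Assumes a HuggingFace-style causal LM; templates and names are illustrative.
import torch
import torch.nn.functional as F

def answer_log_likelihood(model, tokenizer, prompt: str, answer: str) -> torch.Tensor:
    """Sum of token log-probabilities of `answer` conditioned on `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    answer_ids = tokenizer(answer, return_tensors="pt",
                           add_special_tokens=False).input_ids
    input_ids = torch.cat([prompt_ids, answer_ids], dim=-1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at positions prompt_len-1 .. end-1 predict the answer tokens.
    ans_logits = logits[0, prompt_ids.size(-1) - 1 : -1]
    log_probs = F.log_softmax(ans_logits, dim=-1)
    token_lls = log_probs.gather(-1, answer_ids[0].unsqueeze(-1)).squeeze(-1)
    return token_lls.sum()

def information_gain(model, tokenizer, question, chain_of_thought, answer):
    """How much the reasoning chain raises the likelihood of the answer."""
    direct = f"Question: {question}\nAnswer:"
    reasoned = f"Question: {question}\nReasoning: {chain_of_thought}\nAnswer:"
    ll_direct = answer_log_likelihood(model, tokenizer, direct, answer)
    ll_reasoned = answer_log_likelihood(model, tokenizer, reasoned, answer)
    return (ll_reasoned - ll_direct).item()
```

A positive gain means the chain genuinely informs the answer rather than restating it, which is the behavior the differential training signal is meant to reward.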
📝 Abstract
Legal Artificial Intelligence (LegalAI) has achieved notable advances in automating judicial decision-making with the support of Large Language Models (LLMs). However, existing legal LLMs still struggle to generate reliable and interpretable reasoning processes. They often default to fast-thinking behavior by producing direct answers without explicit multi-step reasoning, limiting their effectiveness in complex legal scenarios that demand rigorous justification. To address this challenge, we propose Legal$\Delta$, a reinforcement learning framework designed to enhance legal reasoning through chain-of-thought guided information gain. During training, Legal$\Delta$ employs a dual-mode input setup, comprising direct-answer and reasoning-augmented modes, and maximizes the information gain between them. This encourages the model to acquire meaningful reasoning patterns rather than generating superficial or redundant explanations. Legal$\Delta$ follows a two-stage approach: (1) distilling latent reasoning capabilities from a powerful Large Reasoning Model (LRM), DeepSeek-R1, and (2) refining reasoning quality via differential comparisons, combined with a multidimensional reward mechanism that assesses both structural coherence and legal-domain specificity. Experimental results on multiple legal reasoning tasks demonstrate that Legal$\Delta$ outperforms strong baselines in both accuracy and interpretability. It consistently produces more robust and trustworthy legal judgments without relying on labeled preference data. All code and data will be released at https://github.com/NEUIR/LegalDelta.
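To make the multidimensional reward concrete, here is a hedged Python sketch of how structural coherence and legal-domain specificity might be scored and combined with answer correctness; the component checks, weights, and function names are assumptions for illustration, not Legal$\Delta$'s published reward.

```python
# Illustrative reward shaping in the spirit of the abstract: one term for the
# structure of the reasoning trace, one for legal-domain specificity, one for
# correctness. All checks and weights are assumptions, not the paper's.
import re

def structure_reward(response: str) -> float:
    """Coherence proxy: a <think>...</think> block followed by a final answer."""
    has_reasoning = bool(re.search(r"<think>.+?</think>", response, re.S))
    has_answer = bool(re.search(r"</think>\s*\S+", response, re.S))
    return 1.0 if (has_reasoning and has_answer) else 0.0

def legal_reward(cited: set, gold: set) -> float:
    """Domain-specificity proxy: recall of gold statute references in the output."""
    return len(cited & gold) / len(gold) if gold else 0.0

def total_reward(response: str, cited: set, gold: set, correct: bool,
                 w_struct: float = 0.2, w_legal: float = 0.3,
                 w_acc: float = 0.5) -> float:
    """Weighted combination of structural, legal, and accuracy terms."""
    return (w_struct * structure_reward(response)
            + w_legal * legal_reward(cited, gold)
            + w_acc * float(correct))
```

In a preference-free setup like the one described, such a scalar reward, together with the information-gain term, can drive standard policy-gradient RL without any labeled preference pairs.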