🤖 AI Summary
Current large language models (LLMs) lack explicit syllogistic reasoning capabilities in legal domains, producing implicit, unstructured outputs with limited interpretability and trustworthiness. To address this, we propose SyLeR, a novel framework that establishes the first explicit syllogism-oriented legal reasoning paradigm. SyLeR constructs the major premise via hierarchical tree-based retrieval that jointly integrates statutory provisions and case law, and it employs a two-stage optimization strategy: supervised fine-tuning for initialization, followed by structure-aware reinforcement learning with custom rewards. The framework generalizes across languages (Chinese and French), user groups (legal professionals and the general public), and model backbones. Experiments show that SyLeR significantly improves reasoning accuracy while generating syllogistic legal answers that are structurally transparent, premise-verifiable, and conclusion-traceable, thereby enhancing both credibility and explainability.
📝 Abstract
Syllogistic reasoning is a fundamental aspect of legal decision-making, enabling logical conclusions by connecting general legal principles with specific case facts. Although existing large language models (LLMs) can generate responses to legal questions, they fail to perform explicit syllogistic reasoning, often producing implicit and unstructured answers that lack explainability and trustworthiness. To address this limitation, we propose SyLeR, a novel framework that empowers LLMs to engage in explicit syllogistic legal reasoning. SyLeR integrates a tree-structured hierarchical retrieval mechanism to effectively combine relevant legal statutes and precedent cases, forming comprehensive major premises. This is followed by a two-stage fine-tuning process: a supervised fine-tuning warm-up establishes a foundational understanding of syllogistic reasoning, while reinforcement learning with a structure-aware reward mechanism refines the model's ability to generate diverse, logically sound, and well-structured reasoning paths. We conducted extensive experiments across various dimensions, including in-domain and cross-domain user groups (legal laypersons and practitioners), multiple languages (Chinese and French), and different LLM backbones (legal-specific and open-domain LLMs). The results show that SyLeR significantly improves response accuracy and consistently delivers explicit, explainable, and trustworthy legal reasoning.
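To give a concrete sense of the structure-aware reward idea mentioned above, here is a minimal, hypothetical sketch. The paper does not specify its reward design in this abstract; the function below only illustrates the general notion of scoring whether a generated answer exposes the three syllogism parts (major premise, minor premise, conclusion) in order. All names and the section labels are illustrative assumptions, not SyLeR's actual implementation.

```python
# Hypothetical sketch of a structure-aware reward signal.
# Scores the fraction of syllogism sections that appear, in order,
# in a generated legal answer. Not SyLeR's actual reward function.

SYLLOGISM_PARTS = ["Major premise", "Minor premise", "Conclusion"]

def structure_reward(answer: str) -> float:
    """Return the fraction of syllogism sections present in order (0.0 to 1.0)."""
    pos = -1   # position of the most recently matched section label
    found = 0
    for part in SYLLOGISM_PARTS:
        idx = answer.find(part, pos + 1)  # must occur after the previous part
        if idx > pos:
            pos = idx
            found += 1
    return found / len(SYLLOGISM_PARTS)

# Toy example of a well-structured syllogistic answer:
answer = (
    "Major premise: The cited statute requires repayment of loans.\n"
    "Minor premise: The defendant failed to repay the plaintiff.\n"
    "Conclusion: The defendant must repay the loan."
)
print(structure_reward(answer))  # 1.0 — all three parts present, in order
```

Such a score could be combined with an accuracy-based reward during the reinforcement learning stage, so that the policy is pushed toward answers that are both correct and explicitly structured.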