SyLeR: A Framework for Explicit Syllogistic Legal Reasoning in Large Language Models

📅 2025-04-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) lack explicit syllogistic reasoning capabilities in legal domains, producing implicit, unstructured outputs with limited interpretability and trustworthiness. To address this, we propose SyLeR—a novel framework establishing the first explicit syllogism-oriented legal reasoning paradigm. SyLeR constructs the major premise via hierarchical tree-based retrieval that jointly integrates statutory provisions and case law; it employs a two-stage optimization strategy comprising supervised fine-tuning initialization followed by structure-aware reinforcement learning with custom rewards. The framework demonstrates strong generalization across languages (Chinese/French), user groups (legal professionals/general public), and model backbones. Experiments show that SyLeR significantly improves reasoning accuracy while generating syllogistic legal answers that are structurally transparent, premise-verifiable, and conclusion-traceable—thereby enhancing both credibility and explainability.

📝 Abstract
Syllogistic reasoning is a fundamental aspect of legal decision-making, enabling logical conclusions by connecting general legal principles with specific case facts. Although existing large language models (LLMs) can generate responses to legal questions, they fail to perform explicit syllogistic reasoning, often producing implicit and unstructured answers that lack explainability and trustworthiness. To address this limitation, we propose SyLeR, a novel framework that empowers LLMs to engage in explicit syllogistic legal reasoning. SyLeR integrates a tree-structured hierarchical retrieval mechanism to effectively combine relevant legal statutes and precedent cases, forming comprehensive major premises. This is followed by a two-stage fine-tuning process: a supervised fine-tuning warm-up establishes a foundational understanding of syllogistic reasoning, while reinforcement learning with a structure-aware reward mechanism refines the model's ability to generate diverse, logically sound, and well-structured reasoning paths. We conducted extensive experiments across various dimensions, including in-domain and cross-domain user groups (legal laypersons and practitioners), multiple languages (Chinese and French), and different LLM backbones (legal-specific and open-domain LLMs). The results show that SyLeR significantly improves response accuracy and consistently delivers explicit, explainable, and trustworthy legal reasoning.
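The paper's retrieval step descends a hierarchy of legal sources to assemble the major premise. A minimal sketch of that idea, assuming a statute corpus organized as a tree (code → chapter → article): the node layout, the toy lexical scorer, and the beam width are illustrative assumptions, not the authors' implementation, which presumably uses a learned retriever.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str
    children: list = field(default_factory=list)

def score(query: str, text: str) -> float:
    # Toy lexical-overlap scorer; a learned dense retriever would replace this.
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / (len(q) or 1)

def retrieve(root: Node, query: str, beam: int = 2) -> list:
    """Descend the statute tree level by level, keeping the `beam`
    best-matching children of each node, and return the leaf articles
    reached, ranked by relevance to the query."""
    frontier, leaves = [root], []
    while frontier:
        next_frontier = []
        for node in frontier:
            if not node.children:
                leaves.append(node)
            else:
                ranked = sorted(node.children,
                                key=lambda c: score(query, c.text),
                                reverse=True)
                next_frontier.extend(ranked[:beam])
        frontier = next_frontier
    return sorted(leaves, key=lambda n: score(query, n.text), reverse=True)
```

Pruning at each level keeps retrieval cost proportional to tree depth rather than corpus size, which is the usual motivation for hierarchical retrieval over flat search.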
Problem

Research questions and friction points this paper is trying to address:

- Enabling explicit syllogistic reasoning in legal LLMs
- Improving the explainability and trustworthiness of legal answers
- Combining legal statutes and precedent cases for structured reasoning

Innovation

Methods, ideas, or system contributions that make the work stand out:

- Tree-structured hierarchical retrieval for legal premises
- Two-stage fine-tuning for syllogistic reasoning
- Structure-aware reward mechanism for diverse reasoning
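The structure-aware reward rewards answers that lay out an explicit syllogism. A minimal sketch of one plausible scoring rule: the section markers, partial-credit weights, and ordering bonus are assumptions for illustration, not the paper's actual reward function.

```python
# Hypothetical structure-aware reward: grant partial credit for each
# syllogistic section present in the answer, plus a bonus when the
# sections appear in the canonical order
# (major premise -> minor premise -> conclusion).

SECTIONS = ["major premise", "minor premise", "conclusion"]

def structure_reward(answer: str) -> float:
    """Return a score in [0, 1]."""
    text = answer.lower()
    positions = [text.find(s) for s in SECTIONS]
    present = [p for p in positions if p >= 0]
    score = len(present) / len(SECTIONS) * 0.8   # presence: up to 0.8
    if len(present) == len(SECTIONS) and positions == sorted(positions):
        score += 0.2                              # correct-ordering bonus
    return score
```

A shaped reward of this kind can be combined with an answer-correctness reward during the reinforcement learning stage, so the policy is pushed toward responses that are both accurate and explicitly structured.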
Authors

Kepu Zhang (Renmin University of China)
Weijie Yu (School of Information Technology and Management, University of International Business and Economics, Beijing, China)
Zhongxiang Sun (Renmin University of China)
Jun Xu (Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China)