Elevating Legal LLM Responses: Harnessing Trainable Logical Structures and Semantic Knowledge with Legal Reasoning

πŸ“… 2025-02-11
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Large language models (LLMs) exhibit insufficient logical specificity and suffer from factual hallucinations in legal question answering. Existing retrieval-augmented generation (RAG) methods rely solely on semantic similarity, neglecting the structured logical reasoning essential for legal inference. To address this, we propose the first logic-semantic joint modeling paradigm, comprising trainable fact-rule chain prediction, Deep Structured Semantic Matching (DSSM), and logic-aware RAGβ€”enabling end-to-end co-modeling of logical structure and semantic knowledge. Our method employs reinforcement learning to optimize logical chain reasoning and integrates a structured context generation mechanism. Evaluated on real-world legal QA benchmarks, our framework significantly improves answer accuracy and credibility. Both automated metrics and human evaluation demonstrate consistent superiority over state-of-the-art approaches. This work establishes a novel, reliable reasoning paradigm for legal-domain LLMs.

Technology Category

Application Category

πŸ“ Abstract
Large Language Models (LLMs) have achieved impressive results across numerous domains, yet they experience notable deficiencies in legal question-answering tasks. LLMs often generate generalized responses that lack the logical specificity required for expert legal advice and are prone to hallucination, providing answers that appear correct but are unreliable. Retrieval-Augmented Generation (RAG) techniques offer partial solutions to address this challenge, but existing approaches typically focus only on semantic similarity, neglecting the logical structure essential to legal reasoning. In this paper, we propose the Logical-Semantic Integration Model (LSIM), a novel supervised framework that bridges semantic and logical coherence. LSIM comprises three components: reinforcement learning predicts a structured fact-rule chain for each question, a trainable Deep Structured Semantic Model (DSSM) retrieves the most relevant candidate questions by integrating semantic and logical features, and in-context learning generates the final answer using the retrieved content. Our experiments on a real-world legal QA dataset-validated through both automated metrics and human evaluation-demonstrate that LSIM significantly enhances accuracy and reliability compared to existing methods.
Problem

Research questions and friction points this paper is trying to address.

Improves legal question-answering accuracy
Integrates logical and semantic coherence
Reduces hallucination in LLM responses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trainable Logical Structures
Semantic Knowledge Integration
Reinforcement Learning Framework
πŸ”Ž Similar Papers