Hybrid Retrieval-Augmented Generation Agent for Trustworthy Legal Question Answering in Judicial Forensics

📅 2025-11-03
🤖 AI Summary
To address critical judicial evidence challenges in legal question answering—including severe hallucination, knowledge obsolescence, and non-auditable answers—this paper proposes a retrieval-first, multi-model fusion framework. The framework integrates retrieval-augmented generation (RAG), collaborative generation across multiple large language models (LLMs), a domain-specialized selector for answer ranking, and a human feedback–driven closed-loop mechanism to ensure answer veracity, traceable provenance, and dynamic knowledge base evolution. Its key innovations include (1) feeding human verification outcomes back into knowledge base updates, and (2) explicitly modeling legal domain expertise via the selector to suppress hallucination. Evaluated on the Law_QA benchmark, the framework achieves significant improvements over single-LLM baselines and conventional RAG in F1 score, ROUGE-L, and LLM-as-a-Judge metrics—demonstrating comprehensive gains in legal compliance, factual accuracy, and system trustworthiness.

📝 Abstract
As artificial intelligence permeates judicial forensics, ensuring the veracity and traceability of legal question answering (QA) has become critical. Conventional large language models (LLMs) are prone to hallucination, risking misleading guidance in legal consultation, while static knowledge bases struggle to keep pace with frequently updated statutes and case law. We present a hybrid legal QA agent tailored for judicial settings that integrates retrieval-augmented generation (RAG) with multi-model ensembling to deliver reliable, auditable, and continuously updatable counsel. The system prioritizes retrieval over generation: when a trusted legal repository yields relevant evidence, answers are produced via RAG; otherwise, multiple LLMs generate candidates that are scored by a specialized selector, with the top-ranked answer returned. High-quality outputs then undergo human review before being written back to the repository, enabling dynamic knowledge evolution and provenance tracking. Experiments on the Law_QA dataset show that our hybrid approach significantly outperforms both a single-model baseline and a vanilla RAG pipeline on F1, ROUGE-L, and an LLM-as-a-Judge metric. Ablations confirm the complementary contributions of retrieval prioritization, model ensembling, and the human-in-the-loop update mechanism. The proposed system demonstrably reduces hallucination while improving answer quality and legal compliance, advancing the practical deployment of media forensics technologies in judicial scenarios.
Problem

Research questions and friction points this paper is trying to address.

Ensuring veracity and traceability in legal question answering systems
Overcoming hallucination risks in conventional large language models
Addressing outdated knowledge in static legal knowledge bases
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid RAG with multi-model ensembling for legal QA
Prioritizes retrieval over generation for reliable answers
Human-in-the-loop updates enable dynamic knowledge evolution
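The retrieval-first routing described above can be sketched as a small dispatcher: answer from the trusted repository when retrieval succeeds, otherwise fall back to multi-LLM candidate generation ranked by a domain selector. This is a minimal illustrative sketch, not the paper's implementation; all function and parameter names (`retrieve`, `rag_answer`, `generators`, `selector_score`, `min_evidence`) are hypothetical placeholders.

```python
def answer_legal_query(query, retrieve, rag_answer, generators,
                       selector_score, min_evidence=1):
    """Hypothetical retrieval-first hybrid QA dispatch.

    retrieve(query)        -> list of (doc_id, text) evidence pairs
    rag_answer(query, ev)  -> answer grounded in retrieved evidence
    generators             -> list of LLM callables, each query -> candidate
    selector_score(q, c)   -> domain-selector score for a candidate
    """
    evidence = retrieve(query)
    if len(evidence) >= min_evidence:
        # Retrieval-first path: ground the answer in trusted legal sources
        # and keep document IDs for provenance/auditability.
        return {"answer": rag_answer(query, evidence),
                "source": "rag",
                "provenance": [doc_id for doc_id, _ in evidence]}
    # Fallback path: each LLM proposes a candidate; the specialized
    # selector ranks them and the top-scoring answer is returned.
    candidates = [generate(query) for generate in generators]
    best = max(candidates, key=lambda c: selector_score(query, c))
    return {"answer": best, "source": "ensemble", "provenance": []}
```

In the full system, answers from either path would additionally pass human review before being written back into the repository, closing the knowledge-update loop.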
Yueqing Xi
Department of Electronic Engineering, City University of Hong Kong (Dongguan), Dongguan
Yifan Bai
Department of Electrical and Information Engineering, Tianjin University, Tianjin
Huasen Luo
Department of Electronic Engineering, City University of Hong Kong (Dongguan), Dongguan
Weiliang Wen
Department of Electronic Engineering, City University of Hong Kong, Hong Kong
Hui Liu
Department of Electronic Engineering, City University of Hong Kong, Hong Kong
Haoliang Li
Department of Electrical Engineering, City University of Hong Kong
AI Security · Information Forensics and Security · Machine Learning