CogniBench: A Legal-inspired Framework and Dataset for Assessing Cognitive Faithfulness of Large Language Models

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM evaluation benchmarks overlook “cognitive statements”—context-dependent inferential conclusions—and focus solely on verbatim “factual statements,” rendering cognitive hallucinations difficult to detect and mitigate. Method: We propose CogniBench, the first legal-inspired evaluation framework for cognitive faithfulness: (1) it formally distinguishes factual from cognitive statements and defines multi-level faithfulness criteria grounded in judicial evidentiary logic; (2) it introduces a hybrid annotation pipeline combining crowdsourcing with rule-based refinement to construct two benchmark datasets—CogniBench (small-scale, high-quality) and CogniBench-L (large-scale); (3) it trains and open-sources a dedicated cognitive hallucination detection model. Results: Experiments reveal systematic cognitive hallucinations across mainstream LLMs; CogniBench-L significantly improves detection accuracy. Our work establishes a novel paradigm and foundational infrastructure for aligning LLMs with faithful, legally grounded reasoning.
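To make the factual/cognitive distinction concrete, consider a plain entailment check: it accepts statements that rephrase the context, but flags inferential statements as unsupported even when the inference is sound, which is precisely the gap the multi-level criteria address. The sketch below illustrates this with an off-the-shelf NLI model; the checkpoint name and example texts are illustrative stand-ins, not the detector released with the paper.

```python
# Sketch: context-grounded claim checking with an off-the-shelf NLI model.
# "microsoft/deberta-large-mnli" is a stand-in, NOT the CogniBench detector;
# see the paper's repository for the released checkpoint.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "microsoft/deberta-large-mnli"
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def verdict(context: str, claim: str) -> str:
    """Classify a claim against its context as ENTAILMENT / NEUTRAL / CONTRADICTION."""
    inputs = tok(context, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1).squeeze()
    return model.config.id2label[int(probs.argmax())]

context = ("The report states that Q3 revenue grew 12% year-over-year, "
           "driven mainly by the cloud division.")
# Factual statement: rephrases the source -> typically ENTAILMENT.
print(verdict(context, "Q3 revenue rose 12% compared to last year."))
# Cognitive statement: an inference beyond the text -> typically NEUTRAL,
# even when the inference is reasonable; a binary check cannot grade it.
print(verdict(context, "The company's cloud business will keep driving growth."))
```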

📝 Abstract
Faithfulness hallucinations are claims generated by a Large Language Model (LLM) that are not supported by the context provided to the LLM. Lacking an assessment standard, existing benchmarks contain only "factual statements" that rephrase source materials, without marking "cognitive statements" that draw inferences from the given context, making the consistency evaluation and optimization of cognitive statements difficult. Inspired by how evidence is assessed in the legal domain, we design a rigorous framework to assess different levels of faithfulness of cognitive statements and create a benchmark dataset from which we reveal insightful statistics. We design an annotation pipeline to create larger benchmarks for different LLMs automatically, and the resulting larger-scale CogniBench-L dataset can be used to train an accurate cognitive hallucination detection model. We release our model and dataset at: https://github.com/FUTUREEEEEE/CogniBench
Problem

Research questions and friction points this paper is trying to address.

Assessing the cognitive faithfulness of LLMs lacks established standards
Existing benchmarks omit evaluation of cognitive statements
A framework is needed for detecting cognitive hallucinations in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Legal-inspired framework for cognitive faithfulness assessment (see the rubric sketch after this list)
Automated annotation pipeline for benchmark dataset creation
Large-scale dataset for hallucination detection model training
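One way to picture the multi-level faithfulness criteria is as an ordered rubric applied to each statement during annotation, with a threshold separating acceptable inference from cognitive hallucination. The level names below are an assumption modeled on evidentiary standards, not the paper's exact taxonomy.

```python
from enum import IntEnum

class Faithfulness(IntEnum):
    """Illustrative ordered rubric; level names are assumptions,
    not CogniBench's exact taxonomy."""
    CONTRADICTED = 0   # conflicts with the provided context
    UNSUPPORTED = 1    # speculation the context does not warrant
    REASONABLE = 2     # sound inference from the context (cognitive)
    ENTAILED = 3       # directly restates the context (factual)

def is_cognitive_hallucination(level: Faithfulness) -> bool:
    # A cognitive statement hallucinates when it falls below
    # the "reasonable inference" bar.
    return level < Faithfulness.REASONABLE
```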
👥 Authors

Xiaqiang Tang
HKUST(GZ)
LLM, RAG, Trustworthy AI

Jian Li
Hunyuan AI Digital Human, Tencent

Keyu Hu
The Hong Kong University of Science and Technology (Guangzhou)

Du Nan
Hunyuan AI Digital Human, Tencent

Xi Zhang
Beijing University of Posts and Telecommunications

Weigao Sun
Research Scientist, Shanghai AI Laboratory
LLM, Deep Learning, Optimization

Sihong Xie
Associate Professor at AI Thrust, Information Hub, HKUST-GZ
data mining, machine learning