HaluMem: Evaluating Hallucinations in Memory Systems of Agents

📅 2025-11-05
📈 Citations: 0
Influential Citations: 0
📄 PDF
🤖 AI Summary
This work addresses the problem of localizing and evaluating hallucinations (fabrications, errors, conflicts, and omissions) in the memory systems of AI agents. To this end, the authors propose HaluMem, the first operation-level hallucination benchmark for memory systems, which decomposes the memory lifecycle into three fine-grained evaluation tasks: memory extraction, memory updating, and memory question answering, enabling granular hallucination analysis under million-token contexts. They introduce two large-scale evaluation datasets, HaluMem-Medium and HaluMem-Long, incorporating multi-turn human-AI interactions, explicit memory-point annotations, and typologically diverse questions. Empirical evaluation reveals that hallucinations predominantly originate and accumulate during the extraction and updating stages and then propagate to question answering, substantially degrading downstream QA accuracy. The findings motivate interpretable, constrained memory operation mechanisms, and the benchmark provides foundational datasets and methodology for rigorous, stage-level hallucination analysis of memory systems.

📝 Abstract
Memory systems are key components that enable AI systems such as LLMs and AI agents to achieve long-term learning and sustained interaction. However, during memory storage and retrieval, these systems frequently exhibit memory hallucinations, including fabrication, errors, conflicts, and omissions. Existing evaluations of memory hallucinations rely primarily on end-to-end question answering, which makes it difficult to localize the operational stage within the memory system where hallucinations arise. To address this, we introduce the Hallucination in Memory Benchmark (HaluMem), the first operation-level hallucination evaluation benchmark tailored to memory systems. HaluMem defines three evaluation tasks (memory extraction, memory updating, and memory question answering) to comprehensively reveal hallucination behaviors across different operational stages of interaction. To support evaluation, we construct user-centric, multi-turn human-AI interaction datasets, HaluMem-Medium and HaluMem-Long. Both include about 15k memory points and 3.5k multi-type questions. The average dialogue length per user reaches 1.5k and 2.6k turns, respectively, with context lengths exceeding 1M tokens, enabling evaluation of hallucinations across different context scales and task complexities. Empirical studies based on HaluMem show that existing memory systems tend to generate and accumulate hallucinations during the extraction and updating stages, which subsequently propagate errors to the question answering stage. Future research should focus on developing interpretable and constrained memory operation mechanisms that systematically suppress hallucinations and improve memory reliability.
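For intuition, the sketch below shows what operation-level scoring along these lines could look like, with one scorer per HaluMem task stage. All names here (MemoryPoint, the score_* functions) and the exact metric formulas are assumptions for illustration, not the paper's released code or its official metric definitions.

```python
# A minimal sketch of operation-level hallucination scoring in the spirit of
# HaluMem's three tasks. MemoryPoint and the score_* functions are hypothetical
# illustrations, not the paper's actual implementation.
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen -> hashable, so points can live in sets
class MemoryPoint:
    subject: str    # e.g. "user"
    attribute: str  # e.g. "favorite_city"
    value: str      # e.g. "Kyoto"


def score_extraction(extracted: set[MemoryPoint], gold: set[MemoryPoint]) -> dict:
    """Task 1 (memory extraction): did the system store the right points?
    Fabrications are extracted points with no gold support; omissions are
    gold points the system never stored."""
    fabricated = extracted - gold
    omitted = gold - extracted
    return {
        "precision": 1 - len(fabricated) / max(len(extracted), 1),
        "recall": 1 - len(omitted) / max(len(gold), 1),
    }


def score_update(memory_after: dict, gold_after: dict) -> float:
    """Task 2 (memory updating): after a fact changes mid-dialogue, does the
    stored value match the new ground truth, with no stale entries left over?"""
    correct = sum(memory_after.get(k) == v for k, v in gold_after.items())
    return correct / max(len(gold_after), 1)


def score_qa(predicted: str, gold: str) -> bool:
    """Task 3 (memory question answering): end-to-end answer correctness;
    failures here may be inherited from the two stages above."""
    return predicted.strip().lower() == gold.strip().lower()
```

Scoring each stage separately against annotated memory states is what lets a wrong final answer be traced back to the operation where the hallucination first entered the memory, rather than blaming question answering alone.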
Problem

Research questions and friction points this paper is trying to address.

Evaluating hallucinations in AI memory systems during storage and retrieval
Localizing hallucination origins in memory extraction and updating stages
Assessing memory reliability across multi-turn interactions with long contexts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces an operation-level hallucination evaluation benchmark for memory systems
Defines three evaluation tasks: memory extraction, memory updating, and memory question answering
Constructs large-scale multi-turn human-AI interaction datasets (HaluMem-Medium and HaluMem-Long) for testing
👥 Authors
Ding Chen
Postdoctoral Scholar, University of Texas Southwestern Medical Center
Simin Niu
MemTensor (Shanghai) Technology
Kehang Li
MemTensor (Shanghai) Technology
Peng Liu
MemTensor (Shanghai) Technology
Xiangping Zheng
Harbin Engineering University
Bo Tang
MemTensor (Shanghai) Technology
Xinchi Li
China Telecom Research Institute
Feiyu Xiong
MemTensor (Shanghai) Technology Co., Ltd.
Machine Learning, NLP, LLM
Zhiyu Li
Tianjin University
Robust control, attitude control