MemBuilder: Reinforcing LLMs for Long-Term Memory Construction via Attributed Dense Rewards

๐Ÿ“… 2026-01-09
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿค– AI Summary
This work addresses the challenge of maintaining consistency in long-term dialogues, where existing memory mechanisms struggle to effectively model the temporal evolution of historical states. To this end, the paper proposes MemBuilder, a reinforcement learningโ€“based framework for constructing multidimensional long-term memory. The approach generates dense intermediate rewards through conversation-level synthetic question answering and incorporates a contribution-aware gradient weighting mechanism to accurately attribute the impact of individual memory components. Experimental results demonstrate that MemBuilder, despite having only 4 billion parameters, outperforms current state-of-the-art closed-source models across multiple long-context dialogue benchmarks, exhibiting strong generalization capabilities and superior memory retention.
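The dense-reward idea described above can be sketched minimally: after each conversation session, synthetic question-answer pairs probe the memory built so far, and the fraction answered correctly becomes an intermediate reward. This is a hypothetical illustration, not the authors' implementation; `answerer`, `memory`, and `qa_pairs` are assumed names.

```python
def session_reward(memory, qa_pairs, answerer):
    """Dense intermediate reward for one session: the fraction of
    synthetic session-level questions that the current memory state
    answers correctly (hypothetical sketch of the paper's idea)."""
    if not qa_pairs:
        return 0.0
    correct = sum(1 for question, gold in qa_pairs
                  if answerer(memory, question) == gold)
    return correct / len(qa_pairs)
```

Because a reward is emitted after every session rather than only at the end of the full trajectory, the training signal is far denser than a single trajectory-level reward.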

๐Ÿ“ Abstract
Maintaining consistency in long-term dialogues remains a fundamental challenge for LLMs, as standard retrieval mechanisms often fail to capture the temporal evolution of historical states. While memory-augmented frameworks offer a structured alternative, current systems rely on static prompting of closed-source models or suffer from ineffective training paradigms with sparse rewards. We introduce MemBuilder, a reinforcement learning framework that trains models to orchestrate multi-dimensional memory construction with attributed dense rewards. MemBuilder addresses two key challenges: (1) Sparse Trajectory-Level Rewards: we employ synthetic session-level question generation to provide dense intermediate rewards across extended trajectories; and (2) Multi-Dimensional Memory Attribution: we introduce contribution-aware gradient weighting that scales policy updates based on each component's downstream impact. Experimental results show that MemBuilder enables a 4B-parameter model to outperform state-of-the-art closed-source baselines, exhibiting strong generalization across long-term dialogue benchmarks.
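The contribution-aware gradient weighting described in the abstract can be illustrated with a minimal sketch: each memory component's share of the policy-gradient loss is scaled by its measured downstream impact instead of being weighted uniformly. This is an assumed, simplified rendering (REINFORCE-style, plain Python), not MemBuilder's actual training code; `impacts`, `advantages`, and `log_probs` are hypothetical names.

```python
def contribution_weights(impacts):
    """Normalize per-component downstream-impact scores into weights
    that sum to one; fall back to uniform weights if all impacts are zero."""
    total = sum(impacts)
    if total == 0:
        return [1.0 / len(impacts)] * len(impacts)
    return [x / total for x in impacts]

def weighted_policy_loss(log_probs, advantages, impacts):
    """REINFORCE-style loss in which each memory component's term is
    scaled by its contribution weight, so components with larger
    downstream impact receive proportionally larger policy updates."""
    weights = contribution_weights(impacts)
    return -sum(w * a * lp
                for w, a, lp in zip(weights, advantages, log_probs))
```

Under this weighting, a component whose memory writes measurably improve downstream question answering dominates the gradient, while low-impact components contribute little, which is one way to attribute credit across multi-dimensional memory.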
Problem

Research questions and friction points this paper is trying to address.

long-term memory
dialogue consistency
sparse rewards
memory-augmented LLMs
temporal evolution
Innovation

Methods, ideas, or system contributions that make the work stand out.

reinforcement learning
long-term memory
dense rewards
memory attribution
LLM alignment
Zhiyu Shen
School of Computer Science and Engineering, Sun Yat-sen University
Ziming Wu
Hong Kong University of Science and Technology
Fuming Lai
Tencent Inc.
Shaobing Lian
Tencent Inc.
Yanghui Rao
Sun Yat-sen University
Text Mining
Topic Modeling
Representation Learning