MOOM: Maintenance, Organization and Optimization of Memory in Ultra-Long Role-Playing Dialogues

📅 2025-09-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address uncontrolled memory growth and degraded coherence in ultra-long role-playing dialogues, this paper proposes MOOM, a dual-branch memory plugin. Methodologically, it integrates literary theory into dialogue modeling by establishing parallel branches for plot progression and character development; it draws on the psychological theory of competitive inhibition to design a dynamic forgetting mechanism; and it combines multi-timescale summarization, user-specific feature extraction, and a lightweight scheduling strategy to achieve controllable memory capacity and long-term coherence. Experiments on the newly constructed Chinese ultra-long dialogue dataset ZH-4O (averaging 600 turns per dialogue) show that MOOM reduces LLM invocation frequency while improving memory efficiency and dialogue consistency, outperforming existing baseline methods.

📝 Abstract
Memory extraction is crucial for maintaining coherent ultra-long dialogues in human-robot role-playing scenarios. However, existing methods often exhibit uncontrolled memory growth. To address this, we propose MOOM, the first dual-branch memory plugin that leverages literary theory by modeling plot development and character portrayal as core storytelling elements. Specifically, one branch summarizes plot conflicts across multiple time scales, while the other extracts the user's character profile. MOOM further integrates a forgetting mechanism, inspired by the "competition-inhibition" memory theory, to constrain memory capacity and mitigate uncontrolled growth. Furthermore, we present ZH-4O, a Chinese ultra-long dialogue dataset specifically designed for role-playing, featuring dialogues that average 600 turns and include manually annotated memory information. Experimental results demonstrate that MOOM outperforms all state-of-the-art memory extraction methods, requiring fewer large language model invocations while maintaining a controllable memory capacity.
Problem

Research questions and friction points this paper is trying to address.

Managing uncontrolled memory growth in ultra-long dialogues
Extracting plot and character elements for coherent storytelling
Maintaining controllable memory capacity with fewer model calls
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-branch memory plugin design
Plot conflict and character profile extraction
Competition-inhibition forgetting mechanism integration
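The three contributions above can be sketched as a toy Python class. All names here are hypothetical illustrations, not the paper's actual API: the real plugin uses LLM calls for plot summarization and profile extraction, which are stubbed out with string operations, and the paper's competition-inhibition mechanism is approximated by a simple strength-based pruning rule.

```python
from collections import deque

class DualBranchMemory:
    """Toy sketch of a MOOM-style dual-branch memory with bounded forgetting.

    Illustrative only: LLM-based summarization and profile extraction are
    replaced by string stubs; the forgetting rule is a crude stand-in for
    competition-inhibition.
    """

    def __init__(self, capacity=5, buffer_len=10):
        self.capacity = capacity                 # hard cap on plot-memory size
        self.plot_memory = []                    # branch 1: [strength, summary]
        self.character_profile = {}              # branch 2: user traits
        self.recent_turns = deque(maxlen=buffer_len)  # short-timescale buffer

    def observe(self, turn, traits=None):
        """Ingest a dialogue turn; summarize the buffer when it fills up."""
        self.recent_turns.append(turn)
        if traits:
            self.character_profile.update(traits)     # profile-extraction stub
        if len(self.recent_turns) == self.recent_turns.maxlen:
            summary = " / ".join(self.recent_turns)   # stand-in for LLM summary
            self._store(summary)
            self.recent_turns.clear()

    def recall(self, query):
        """Rehearsal: matching entries gain strength and are returned."""
        hits = []
        for entry in self.plot_memory:
            if query in entry[1]:
                entry[0] += 1.0
                hits.append(entry[1])
        return hits

    def _store(self, summary, strength=1.0):
        self.plot_memory.append([strength, summary])
        self._forget()

    def _forget(self):
        """Crude competition-inhibition stand-in: while over capacity,
        the weakest (least-rehearsed) entry is suppressed and dropped."""
        while len(self.plot_memory) > self.capacity:
            weakest = min(range(len(self.plot_memory)),
                          key=lambda i: self.plot_memory[i][0])
            del self.plot_memory[weakest]
```

The key property this sketch preserves is the one the paper emphasizes: memory capacity stays bounded no matter how long the dialogue runs, because new entries compete with old ones for a fixed number of slots rather than accumulating indefinitely.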
👥 Authors
Weishu Chen (Beijing University of Posts and Telecommunications)
Jinyi Tang (SenseTime)
Zhouhui Hou (SenseTime)
Shihao Han (The University of Hong Kong)
Mingjie Zhan (SenseTime)
Zhiyuan Huang (SenseTime)
Delong Liu (Beijing University of Posts and Telecommunications)
Jiawei Guo (BUPT & M-A-P)
Zhicheng Zhao (Associate Professor, School of Artificial Intelligence, Anhui University)
Fei Su (Beijing University of Posts and Telecommunications, Beijing Key Laboratory of Network System and Network Culture)