EverMemOS: A Self-Organizing Memory Operating System for Structured Long-Horizon Reasoning

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Large language models, constrained by finite context windows, struggle to maintain coherent behavior over long-term interactions; existing memory systems predominantly store isolated records without modeling the evolution of user state or resolving conflicts. To overcome this limitation, the paper proposes a neuroscience-inspired, self-organizing memory operating system that enables structured long-term reasoning through a three-stage process: episodic memory unit generation, semantic integration, and reconstructive recall. Key innovations include the MemCell and MemScene architectures, time-bounded Foresight signals, and a theme-driven mechanism for organizing memory scenes, which collectively support dynamic user-profile updates and conflict resolution. The system achieves state-of-the-art performance on the LoCoMo and LongMemEval benchmarks and demonstrates superior user modeling and proactive dialogue, as validated by evaluations on PersonaMem v2 and case studies.

📝 Abstract
Large Language Models (LLMs) are increasingly deployed as long-term interactive agents, yet their limited context windows make it difficult to sustain coherent behavior over extended interactions. Existing memory systems often store isolated records and retrieve fragments, limiting their ability to consolidate evolving user states and resolve conflicts. We introduce EverMemOS, a self-organizing memory operating system that implements an engram-inspired lifecycle for computational memory. Episodic Trace Formation converts dialogue streams into MemCells that capture episodic traces, atomic facts, and time-bounded Foresight signals. Semantic Consolidation organizes MemCells into thematic MemScenes, distilling stable semantic structures and updating user profiles. Reconstructive Recollection performs MemScene-guided agentic retrieval to compose the necessary and sufficient context for downstream reasoning. Experiments on LoCoMo and LongMemEval show that EverMemOS achieves state-of-the-art performance on memory-augmented reasoning tasks. We further report a profile study on PersonaMem v2 and qualitative case studies illustrating chat-oriented capabilities such as user profiling and Foresight. Code is available at https://github.com/EverMind-AI/EverMemOS.
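The engram-inspired lifecycle in the abstract (dialogue → MemCells → thematic MemScenes → reconstructive recollection) can be sketched as plain data structures. This is a minimal illustrative sketch; all class names, fields, and the matching logic are assumptions for exposition, not the paper's actual API or implementation:

```python
from dataclasses import dataclass, field

@dataclass
class MemCell:
    """Hypothetical episodic memory unit distilled from a dialogue span."""
    episode: str                # episodic trace of what happened
    facts: list[str]            # atomic facts extracted from the episode
    foresight: dict[str, str]   # time-bounded signals, e.g. {"by 2026-03": "apartment hunt"}

@dataclass
class MemScene:
    """Hypothetical thematic grouping of MemCells after semantic consolidation."""
    theme: str
    cells: list[MemCell] = field(default_factory=list)

def consolidate(cells, theme_of):
    """Stage 2 sketch: organize MemCells into MemScenes keyed by theme."""
    scenes = {}
    for cell in cells:
        theme = theme_of(cell)
        scenes.setdefault(theme, MemScene(theme)).cells.append(cell)
    return list(scenes.values())

def recollect(scenes, query):
    """Stage 3 sketch: naive scene-guided recall -- gather facts from
    scenes whose theme appears in the query string."""
    return [fact
            for scene in scenes if scene.theme in query
            for cell in scene.cells
            for fact in cell.facts]

cells = [
    MemCell("User mentioned moving to Berlin", ["lives in Berlin"],
            {"by 2026-03": "apartment hunt"}),
    MemCell("User said they switched to Rust", ["uses Rust"], {}),
]
scenes = consolidate(cells, lambda c: "location" if "Berlin" in c.episode else "work")
print(recollect(scenes, "location update"))  # -> ['lives in Berlin']
```

The real system replaces the string-matching `theme_of` and `recollect` stubs with LLM-driven consolidation and agentic retrieval; the sketch only shows how the three stages hand data to one another.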
Problem

Research questions and friction points this paper is trying to address.

long-horizon reasoning
memory systems
context window limitation
user state consolidation
coherent behavior
Innovation

Methods, ideas, or system contributions that make the work stand out.

self-organizing memory
engram-inspired lifecycle
MemCell
semantic consolidation
reconstructive recollection
Chuanrui Hu
EverMind, Shanda Group
Xingze Gao
EverMind, Shanda Group
Zuyi Zhou
Institute of Automation, Chinese Academy of Sciences
Deep Learning, Cross-modal Reasoning
Dannong Xu
The University of Sydney
Multimodal Learning
Yi Bai
EverMind, Shanda Group
Xintong Li
EverMind, Shanda Group
Hui Zhang
EverMind, Shanda Group
Tong Li
Lenovo
Computer Architecture, Operating Systems
Chong Zhang
Shanda Group
Bing Li
Shanda Group
Yafeng Deng
Baidu
Large Language Models, Long-term Memory, Continual Learning