Omni Memory System for Personalized, Long Horizon, Self-Evolving Agents

📅 2025-11-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address context inconsistency and insufficient dynamic personalization in long-horizon interactions of LLM-based agents, this paper proposes O-Mem, a novel memory system that replaces conventional semantic-clustering retrieval with an active user modeling mechanism. O-Mem dynamically extracts, incrementally updates, and hierarchically retrieves user representations across two orthogonal dimensions: personality attributes and topic-specific contextual records. It leverages LLM-driven proactive behavioral analysis and self-evolving memory management to jointly optimize interaction coherence and adaptive responsiveness. Evaluated on the LoCoMo and PERSONAMEM benchmarks, O-Mem achieves 51.76% and 62.99% accuracy, respectively, outperforming prior state-of-the-art methods by roughly 3% and 3.5%. Moreover, it significantly reduces token consumption and response latency, demonstrating improved efficiency without sacrificing performance.
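The two-level memory described above (persona attributes plus topic-keyed event records, retrieved hierarchically) can be illustrated with a minimal sketch. This is not the authors' implementation; the `UserMemory` class and its methods are hypothetical, intended only to show the shape of the mechanism.

```python
# Illustrative sketch only -- not the O-Mem implementation.
# Hypothetical two-dimensional memory: persona attributes plus
# topic-keyed event records, retrieved hierarchically.
from dataclasses import dataclass, field

@dataclass
class UserMemory:
    persona: dict = field(default_factory=dict)   # e.g. {"diet": "vegetarian"}
    topics: dict = field(default_factory=dict)    # topic -> list of event records

    def update_persona(self, attribute, value):
        # Incremental update: newer observations overwrite stale attributes.
        self.persona[attribute] = value

    def add_event(self, topic, record):
        # Append a topic-specific contextual record.
        self.topics.setdefault(topic, []).append(record)

    def retrieve(self, query_topics, k=3):
        # Hierarchical retrieval: always surface persona attributes,
        # then add the k most recent records for each matching topic.
        context = {"persona": dict(self.persona), "events": []}
        for topic in query_topics:
            context["events"].extend(self.topics.get(topic, [])[-k:])
        return context

mem = UserMemory()
mem.update_persona("diet", "vegetarian")
mem.add_event("travel", "asked about trains to Kyoto")
ctx = mem.retrieve(["travel"])
# ctx pairs the persona profile with topic-matched records.
```

In the paper's actual system, extraction and updates are driven by an LLM analyzing the user's proactive interactions rather than by explicit setter calls as above.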

📝 Abstract
Recent advancements in LLM-powered agents have demonstrated significant potential in generating human-like responses; however, they continue to face challenges in maintaining long-term interactions within complex environments, primarily due to limitations in contextual consistency and dynamic personalization. Existing memory systems often depend on semantic grouping prior to retrieval, which can overlook semantically irrelevant yet critical user information and introduce retrieval noise. In this report, we propose the initial design of O-Mem, a novel memory framework based on active user profiling that dynamically extracts and updates user characteristics and event records from their proactive interactions with agents. O-Mem supports hierarchical retrieval of persona attributes and topic-related context, enabling more adaptive and coherent personalized responses. O-Mem achieves 51.76% on the public LoCoMo benchmark, a nearly 3% improvement upon LangMem, the previous state of the art, and 62.99% on PERSONAMEM, a 3.5% improvement upon A-Mem, the previous state of the art. O-Mem also improves token and response-time efficiency compared to previous memory frameworks. Our work opens up promising directions for developing efficient and human-like personalized AI assistants in the future.
Problem

Research questions and friction points this paper is trying to address.

Maintaining long-term contextual consistency in LLM agents
Overcoming semantic grouping limitations in memory retrieval
Enhancing dynamic personalization through active user profiling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Active user profiling extracts dynamic user characteristics
Hierarchical retrieval enables adaptive personalized responses
Memory framework improves efficiency and benchmark performance
👥 Authors
Piaohong Wang, OPPO AI Agent Team
Motong Tian, OPPO AI Agent Team
Jiaxian Li, OPPO AI Agent Team
Yuan Liang, OPPO AI Agent Team
Yuqing Wang, OPPO AI Agent Team
Qianben Chen, OPPO AI Agent Team
Tiannan Wang, OPPO AI Agent Team
Zhicong Lu, Assistant Professor, George Mason University (HCI, social computing, live streaming, creativity support, intangible cultural heritage)
Jiawei Ma, OPPO AI Agent Team
Yuchen Eleanor Jiang, OPPO (natural language processing, machine learning)
Wangchunshu Zhou, OPPO & M-A-P (artificial general intelligence, language agents, large language models, natural language processing)