🤖 AI Summary
To address context inconsistency and insufficient dynamic personalization in long-horizon interactions of LLM-based agents, this paper proposes O-Mem—a novel memory system that replaces conventional semantic-clustering retrieval with an active user modeling mechanism. O-Mem dynamically extracts, incrementally updates, and hierarchically retrieves user representations across two orthogonal dimensions: personality attributes and topic-specific contextual records. It leverages LLM-driven proactive behavioral analysis and self-evolving memory management to jointly optimize interaction coherence and adaptive responsiveness. Evaluated on the LoCoMo and PERSONAMEM benchmarks, O-Mem achieves 51.76% and 62.99% accuracy, respectively—outperforming state-of-the-art methods by roughly 3% and 3.5%. Moreover, it significantly reduces token consumption and response latency, demonstrating improved efficiency without sacrificing performance.
📝 Abstract
Recent advancements in LLM-powered agents have demonstrated significant potential in generating human-like responses; however, they continue to face challenges in maintaining long-term interactions within complex environments, primarily due to limitations in contextual consistency and dynamic personalization. Existing memory systems often depend on semantic grouping prior to retrieval, which can overlook semantically irrelevant yet critical user information and introduce retrieval noise. In this report, we propose the initial design of O-Mem, a novel memory framework based on active user profiling that dynamically extracts and updates user characteristics and event records from users' proactive interactions with agents. O-Mem supports hierarchical retrieval of persona attributes and topic-related context, enabling more adaptive and coherent personalized responses. O-Mem achieves 51.76% on the public LoCoMo benchmark, a nearly 3% improvement over LangMem, the previous state of the art, and 62.99% on PERSONAMEM, a 3.5% improvement over A-Mem, the previous state of the art. O-Mem also improves token efficiency and interaction response time compared with previous memory frameworks. Our work opens up promising directions for developing efficient and human-like personalized AI assistants in the future.
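The abstract's core idea—maintaining persona attributes separately from topic-indexed event records and retrieving them hierarchically—can be illustrated with a minimal sketch. All names (`OMemStore`, `update`, `retrieve`) and the data layout are assumptions for illustration, not the paper's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class OMemStore:
    """Minimal sketch of a two-level user memory: stable persona
    attributes plus topic-indexed event records (hypothetical design,
    not the O-Mem implementation)."""
    persona: dict = field(default_factory=dict)   # e.g. {"diet": "vegetarian"}
    topics: dict = field(default_factory=dict)    # topic -> list of event strings

    def update(self, attribute_updates: dict, topic: str, event: str) -> None:
        # Dynamically extract/update user characteristics and log an event.
        self.persona.update(attribute_updates)
        self.topics.setdefault(topic, []).append(event)

    def retrieve(self, topic: str, k: int = 3) -> dict:
        # Hierarchical retrieval: persona attributes always included,
        # plus the k most recent records for the matching topic.
        return {
            "persona": dict(self.persona),
            "context": self.topics.get(topic, [])[-k:],
        }

mem = OMemStore()
mem.update({"diet": "vegetarian"}, "cooking", "asked for a tofu stir-fry recipe")
mem.update({}, "cooking", "disliked overly spicy dishes")
ctx = mem.retrieve("cooking")
```

Keeping persona attributes outside the topic index is what lets retrieval surface user facts that are semantically unrelated to the current topic—the failure mode the abstract attributes to purely semantic grouping.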