🤖 AI Summary
Persistent large language model agents currently lack effective memory governance, leaving them vulnerable to contradictory information, privacy leaks, and outdated "zombie memories." This work proposes MemArchitect, a memory governance layer decoupled from model weights that introduces, for the first time, a rule-based framework for memory lifecycle management. Through a policy-driven rule engine, MemArchitect enables explicit control over memory decay, conflict resolution, and privacy preservation. Experiments show that memory governed by MemArchitect significantly outperforms unmanaged baselines on agent tasks, underscoring the role of structured memory governance in improving the reliability and safety of autonomous systems.
📝 Abstract
Persistent Large Language Model (LLM) agents expose a critical governance gap in memory management. Standard Retrieval-Augmented Generation (RAG) frameworks treat memory as passive storage, lacking mechanisms to resolve contradictions, enforce privacy, or prevent outdated information ("zombie memories") from contaminating the context window.
We introduce MemArchitect, a governance layer that decouples memory lifecycle management from model weights. MemArchitect enforces explicit, rule-based policies, including memory decay, conflict resolution, and privacy controls.
We demonstrate that governed memory consistently outperforms unmanaged memory in agentic settings, highlighting the necessity of structured memory governance for reliable and safe autonomous systems.