🤖 AI Summary
Current large language models (LLMs) lack a unified, structured memory architecture: they rely solely on parametric memory (learned weights) and transient activation memory (contextual runtime states). External memory approaches such as retrieval-augmented generation (RAG) lack lifecycle management and multimodal support, hindering long-term knowledge evolution. To address this, we propose MemOS, a memory operating system for LLMs that elevates memory to a first-class runtime resource, unifying parametric, activation-state, and plaintext memory. Its core contributions are: (1) a standardized memory abstraction, MemCube; (2) end-to-end memory lifecycle governance and cross-modal integration; and (3) a centralized memory execution framework. Experiments demonstrate that MemOS significantly enhances LLMs' capabilities in long-term knowledge evolution, personalized adaptation, and cross-platform collaboration. By enabling persistent, structured, and multimodal memory management, MemOS establishes a foundational paradigm for continual learning and AGI infrastructure.
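To make the contributions above concrete, here is a minimal sketch of what a MemCube-style unit might look like. This is purely illustrative: the paper defines MemCube abstractly, and the class name fields (`mem_type`, `provenance`, `state`), the `MemoryType`/`LifecycleState` enums, and the lifecycle stages are all assumptions, not the authors' actual API.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Any, List

class MemoryType(Enum):
    """The three memory types MemOS unifies."""
    PARAMETRIC = "parametric"   # knowledge encoded in model weights
    ACTIVATION = "activation"   # transient runtime states (e.g. KV cache)
    PLAINTEXT = "plaintext"     # retrievable external text (RAG-style)

class LifecycleState(Enum):
    """Hypothetical lifecycle stages for governed memory."""
    CREATED = "created"
    ACTIVE = "active"
    ARCHIVED = "archived"

@dataclass
class MemCube:
    """Illustrative standardized memory unit: a payload plus metadata
    enabling tracking, fusion, and migration across tasks and contexts."""
    mem_id: str
    mem_type: MemoryType
    payload: Any
    provenance: List[str] = field(default_factory=list)  # traceable access log
    state: LifecycleState = LifecycleState.CREATED

    def activate(self) -> None:
        """Bring the memory into active use and record the transition."""
        self.state = LifecycleState.ACTIVE
        self.provenance.append("activated")

    def archive(self) -> None:
        """Retire the memory while keeping it traceable."""
        self.state = LifecycleState.ARCHIVED
        self.provenance.append("archived")

# Usage: a plaintext memory unit moving through its lifecycle.
cube = MemCube("m-001", MemoryType.PLAINTEXT, "User prefers concise answers.")
cube.activate()
cube.archive()
```

The key design idea this sketch tries to capture is that heterogeneous memories share one schema, so lifecycle governance and cross-context migration can operate uniformly regardless of whether the payload is weights, activations, or text.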
📝 Abstract
Large Language Models (LLMs) have emerged as foundational infrastructure in the pursuit of Artificial General Intelligence (AGI). Despite their remarkable capabilities in language perception and generation, current LLMs fundamentally lack a unified and structured architecture for handling memory. They primarily rely on parametric memory (knowledge encoded in model weights) and ephemeral activation memory (context-limited runtime states). While emerging methods like Retrieval-Augmented Generation (RAG) incorporate plaintext memory, they lack lifecycle management and multimodal integration, limiting their capacity for long-term knowledge evolution. To address this, we introduce MemOS, a memory operating system designed for LLMs that, for the first time, elevates memory to a first-class operational resource. It builds unified mechanisms for representation, organization, and governance across three core memory types: parametric, activation, and plaintext. At its core is the MemCube, a standardized memory abstraction that enables tracking, fusion, and migration of heterogeneous memory, while offering structured, traceable access across tasks and contexts. MemOS establishes a memory-centric execution framework with strong controllability, adaptability, and evolvability. It fills a critical gap in current LLM infrastructure and lays the groundwork for continual adaptation, personalized intelligence, and cross-platform coordination in next-generation intelligent systems.