🤖 AI Summary
This work addresses the limitation of existing large language model agents, whose memory systems are typically confined to a single memory paradigm and struggle to achieve cross-paradigm alignment and fusion. To overcome this, we propose MemAdapter, a novel framework that enables unified alignment and zero-shot fusion of heterogeneous memory paradigms for the first time. MemAdapter integrates generative subgraph retrieval with a lightweight contrastive alignment module and employs a two-stage training strategy, achieving high performance while substantially reducing computational overhead. Experimental results demonstrate that MemAdapter outperforms five state-of-the-art memory systems across three benchmarks, requiring only 13 minutes for alignment on a single GPU and consuming less than 5% of the training compute cost of existing approaches.
📝 Abstract
The memory mechanism is a core component of LLM-based agents, enabling reasoning and knowledge discovery over long-horizon contexts. Existing agent memory systems are typically designed within isolated paradigms (e.g., explicit, parametric, or latent memory) with tightly coupled retrieval methods that hinder cross-paradigm generalization and fusion. In this work, we take a first step toward unifying heterogeneous memory paradigms within a single memory system. We propose MemAdapter, a memory retrieval framework that enables fast alignment across agent memory paradigms. MemAdapter adopts a two-stage training strategy: (1) training a generative subgraph retriever on the unified memory space, and (2) adapting the retriever to unseen memory paradigms by training a lightweight alignment module through contrastive learning. This design improves the flexibility of memory retrieval and substantially reduces alignment cost across paradigms. Comprehensive experiments on three public evaluation benchmarks demonstrate that the generative subgraph retriever consistently outperforms five strong agent memory systems across three memory paradigms and agent model scales. Notably, MemAdapter completes cross-paradigm alignment within 13 minutes on a single GPU, achieving superior performance over the original memory retrievers with less than 5% of the training compute. Furthermore, MemAdapter enables effective zero-shot fusion across memory paradigms, highlighting its potential as a plug-and-play solution for agent memory systems.
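The abstract does not spell out the form of the contrastive objective used by the alignment module, but a standard choice for this kind of embedding alignment is an InfoNCE-style loss: pull an embedding from the new memory paradigm toward its matching entry in the unified memory space and push it away from mismatched entries. The sketch below is an illustrative, dependency-free version of that generic objective; the function names and the temperature value are assumptions, not details from the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(query, positive, negatives, temperature=0.1):
    """Generic InfoNCE contrastive loss (an assumption about the
    alignment objective, not taken from the paper): low when the
    query matches its positive, high when a negative matches better."""
    sims = [cosine(query, positive)] + [cosine(query, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    return -math.log(exps[0] / sum(exps))

# Toy example: a well-aligned pair yields a near-zero loss,
# while a mismatched pair yields a large one.
query = [1.0, 0.0]
aligned_loss = info_nce(query, [0.9, 0.1], [[0.0, 1.0], [-1.0, 0.0]])
misaligned_loss = info_nce(query, [0.0, 1.0], [[0.9, 0.1], [-1.0, 0.0]])
```

In a training loop, minimizing this loss over paired samples would drive a lightweight adapter (e.g., a single linear projection) to map the new paradigm's embeddings into the retriever's unified space, which is consistent with the low alignment cost the abstract reports.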