Choosing How to Remember: Adaptive Memory Structures for LLM Agents

📅 2026-02-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing memory systems for large language model (LLM) agents typically employ a single, fixed structure, limiting their adaptability in diverse long-horizon interaction scenarios. To address this limitation, this work proposes FluxMem, a novel framework that formulates memory structure selection as a context-adaptive decision problem. FluxMem introduces a three-tiered memory hierarchy coupled with a Beta mixture–based probabilistic gating mechanism to enable distribution-aware dynamic memory fusion. By leveraging a multi-structure memory bank and feedback from downstream response quality, the framework learns to select the optimal memory organization strategy tailored to varying interaction contexts. Evaluated on two long-horizon benchmarks—PERSONAMEM and LoCoMo—FluxMem achieves average performance improvements of 9.18% and 6.14%, respectively.

📝 Abstract
Memory is critical for enabling large language model (LLM) based agents to maintain coherent behavior over long-horizon interactions. However, existing agent memory systems suffer from two key gaps: they rely on a one-size-fits-all memory structure and do not model memory structure selection as a context-adaptive decision, limiting their ability to handle heterogeneous interaction patterns and resulting in suboptimal performance. We propose a unified framework, FluxMem, that enables adaptive memory organization for LLM agents. Our framework equips agents with multiple complementary memory structures. It explicitly learns to select among these structures based on interaction-level features, using offline supervision derived from downstream response quality and memory utilization. To support robust long-horizon memory evolution, we further introduce a three-level memory hierarchy and a Beta Mixture Model-based probabilistic gate for distribution-aware memory fusion, replacing brittle similarity thresholds. Experiments on two long-horizon benchmarks, PERSONAMEM and LoCoMo, demonstrate that our method achieves average improvements of 9.18% and 6.14%.
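The paper itself does not spell out the gate's equations in this abstract, but the core idea of a "Beta Mixture Model-based probabilistic gate replacing brittle similarity thresholds" can be illustrated generically: fit a two-component Beta mixture to retrieval similarity scores (one component for genuine matches, one for noise), then gate memory fusion by the posterior probability that a score came from the match component, rather than by a hard cutoff. The sketch below is an assumption-laden stand-in, not FluxMem's actual implementation; all function names (`fit_beta_mixture`, `fuse_probability`) and the moment-matching EM variant are illustrative choices.

```python
import math
import random

def beta_logpdf(x, a, b):
    """Log density of Beta(a, b) at x in (0, 1)."""
    return ((a - 1) * math.log(x) + (b - 1) * math.log(1 - x)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def fit_beta_mixture(xs, n_iter=200):
    """Fit a 2-component Beta mixture to scores in (0, 1) via EM.

    The M-step uses weighted method-of-moments estimates for each
    component's (alpha, beta) -- an approximation to exact EM that is
    simple and stable for this sketch. Returns (weights, params) with
    params = [(a0, b0), (a1, b1)], component 1 having the higher mean.
    """
    eps = 1e-6
    xs = [min(max(x, eps), 1 - eps) for x in xs]
    params = [(2.0, 5.0), (5.0, 2.0)]  # low-similarity vs. high-similarity init
    weights = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per score.
        resp = []
        for x in xs:
            logp = [math.log(weights[k]) + beta_logpdf(x, *params[k])
                    for k in range(2)]
            m = max(logp)
            p = [math.exp(lp - m) for lp in logp]
            s = sum(p)
            resp.append([pi / s for pi in p])
        # M-step: mixture weights + moment-matched Beta parameters.
        new_params = []
        for k in range(2):
            rk = [r[k] for r in resp]
            nk = sum(rk)
            weights[k] = nk / len(xs)
            mean = sum(r * x for r, x in zip(rk, xs)) / nk
            var = sum(r * (x - mean) ** 2 for r, x in zip(rk, xs)) / nk
            var = max(min(var, mean * (1 - mean) - eps), eps)
            common = mean * (1 - mean) / var - 1
            new_params.append((mean * common, (1 - mean) * common))
        params = new_params
    # Order components so index 1 is the high-similarity ("fuse") one.
    if params[0][0] / sum(params[0]) > params[1][0] / sum(params[1]):
        params.reverse()
        weights.reverse()
    return weights, params

def fuse_probability(x, weights, params):
    """Soft gate: P(score x belongs to the high-similarity component)."""
    eps = 1e-6
    x = min(max(x, eps), 1 - eps)
    logp = [math.log(weights[k]) + beta_logpdf(x, *params[k]) for k in range(2)]
    m = max(logp)
    p = [math.exp(lp - m) for lp in logp]
    return p[1] / sum(p)
```

The payoff over a fixed threshold is that the decision boundary adapts to the observed score distribution: if an agent's retriever produces systematically lower similarities in one context, the mixture shifts accordingly and the gate stays calibrated without retuning.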
Problem

Research questions and friction points this paper is trying to address.

adaptive memory
LLM agents
memory structure
long-horizon interaction
context-adaptive decision
Innovation

Methods, ideas, or system contributions that make the work stand out.

adaptive memory
memory structure selection
LLM agents
memory hierarchy
probabilistic gating