MemPO: Self-Memory Policy Optimization for Long-Horizon Agents

📅 2026-02-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the performance degradation and training instability that long-horizon agents suffer as their context inflates over prolonged interactions. To tackle this, the authors propose Self-Memory Policy Optimization (MemPO), a novel algorithm that dispenses with external memory modules and instead enables the agent to autonomously summarize, compress, and selectively retain critical contextual information. MemPO aligns memory management with the task objective through a credit assignment mechanism grounded in memory effectiveness. Empirical results demonstrate that the method preserves task performance while achieving absolute F1 gains of 25.98% over the base model and 7.1% over the previous state of the art, and it substantially reduces token consumption, by 67.58% and 73.12% respectively.

📝 Abstract
Long-horizon agents face the challenge of growing context size during interaction with the environment, which degrades performance and stability. Existing methods typically introduce an external memory module and look up relevant information from the stored memory, which prevents the model itself from proactively managing its memory content and aligning it with the agent's overarching task objectives. To address these limitations, we propose the self-memory policy optimization algorithm (MemPO), which enables the agent (policy model) to autonomously summarize and manage its memory during interaction with the environment. By improving the credit assignment mechanism based on memory effectiveness, the policy model can selectively retain crucial information, significantly reducing token consumption while preserving task performance. Extensive experiments and analyses confirm that MemPO achieves absolute F1 score gains of 25.98% over the base model and 7.1% over the previous SOTA baseline, while reducing token usage by 67.58% and 73.12%, respectively.
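The abstract's core idea, credit assignment weighted by memory effectiveness, can be illustrated with a minimal sketch. Note this is a hypothetical reconstruction, not the paper's actual algorithm: the function names, the ablation-style effectiveness score (task reward with the retained memory minus reward without it), and the additive weighting with coefficient `beta` are all assumptions made for illustration.

```python
# Hypothetical sketch of memory-effectiveness-based credit assignment.
# All names and formulas here are illustrative assumptions, not MemPO itself.

def memory_effectiveness(reward_with_memory: float,
                         reward_without_memory: float) -> float:
    """Score how much the retained memory helped the downstream task,
    estimated by ablation: reward with the memory minus reward without it."""
    return reward_with_memory - reward_without_memory

def assign_credit(task_advantage: float,
                  effectiveness: float,
                  is_memory_action: bool,
                  beta: float = 1.0) -> float:
    """Boost (or penalize) the advantage of memory-management actions
    (summarize / compress / retain) by how useful the resulting memory
    turned out to be; ordinary actions keep the plain task advantage."""
    if is_memory_action:
        return task_advantage + beta * effectiveness
    return task_advantage

# Example: a summarization step whose retained memory raised downstream
# task reward from 0.3 (memory ablated) to 0.8 (memory kept).
eff = memory_effectiveness(reward_with_memory=0.8, reward_without_memory=0.3)
adv = assign_credit(task_advantage=0.5, effectiveness=eff, is_memory_action=True)
```

Under this toy scoring, memory actions that retained useful information receive larger advantages, so a policy-gradient update would reinforce them; unhelpful retention would be discouraged the same way.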
Problem

Research questions and friction points this paper is trying to address.

long-horizon agents
context growth
memory management
credit assignment
token efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

self-memory
policy optimization
long-horizon agents
credit assignment
token efficiency
Authors
Ruoran Li (Tsinghua University)
Xinghua Zhang (Tongyi Lab, Alibaba Group) · Large Language Model, Low Resource, Information Extraction
Haiyang Yu (Tongyi Lab, Alibaba Group)
Shitong Duan (Tongyi Lab, Alibaba Group)
Xiang Li (Alibaba Group, Meituan) · Recommender System, Advertising, Artificial Intelligence, Computation and Language
Wenxin Xiang (Tsinghua University)
Chonghua Liao (Tsinghua University)
Xudong Guo (Tongyi Lab, Alibaba Group)
Yongbin Li (Tongyi Lab, Alibaba Group)
Jinli Suo (Tsinghua University) · Computer Vision, Computational Photography, Computational Imaging