🤖 AI Summary
This work addresses the performance degradation and training instability that long-horizon agents suffer as their context inflates over prolonged interactions. To tackle this, the authors propose Self-Memory Policy Optimization (MemPO), a novel algorithm that dispenses with external memory modules and instead enables the agent to autonomously summarize, compress, and selectively retain critical contextual information. MemPO aligns memory management with the task objective through a credit assignment mechanism grounded in memory effectiveness. Empirical results demonstrate that the method preserves task performance while achieving absolute F1 gains of 25.98% over the base model and 7.1% over the previous state of the art, and reduces token consumption by 67.58%–73.12%.
📝 Abstract
Long-horizon agents face the challenge of growing context size during interaction with the environment, which degrades performance and stability. Existing methods typically introduce an external memory module and retrieve relevant information from it, which prevents the model itself from proactively managing its memory content and aligning it with the agent's overarching task objectives. To address these limitations, we propose the Self-Memory Policy Optimization algorithm (MemPO), which enables the agent (policy model) to autonomously summarize and manage its memory during interaction with the environment. By improving the credit assignment mechanism based on memory effectiveness, the policy model can selectively retain crucial information, significantly reducing token consumption while preserving task performance. Extensive experiments and analyses confirm that MemPO achieves absolute F1 score gains of 25.98% over the base model and 7.1% over the previous SOTA baseline, while reducing token usage by 67.58% and 73.12%, respectively.
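The loop the abstract describes — summarize the context at every step, then weight the policy update by how useful the retained memory turned out to be — can be sketched in a toy form. This is a minimal illustration, not the paper's implementation: the function names (`summarize`, `memory_effectiveness`, `mempo_step`), the recency-based compression rule, and the fraction-of-entries-used effectiveness proxy are all assumptions made for the sake of the example.

```python
def summarize(context, budget=3):
    """Stand-in for the policy model's self-summarization step:
    here it simply keeps the `budget` most recent entries (assumption;
    in MemPO the policy model itself decides what to retain)."""
    return context[-budget:]

def memory_effectiveness(retained, used_later):
    """Toy proxy for the memory-effectiveness signal: the fraction of
    retained entries that the task actually depends on later."""
    if not retained:
        return 0.0
    return sum(1 for m in retained if m in used_later) / len(retained)

def mempo_step(context, observation, used_later, advantage):
    """One interaction step: append the new observation, compress the
    context, and scale the step's credit by memory effectiveness."""
    context = summarize(context + [observation])
    credit = advantage * memory_effectiveness(context, used_later)
    return context, credit

context = []
trajectory = ["obs-1", "obs-2", "obs-3", "obs-4"]
needed = {"obs-3", "obs-4"}          # entries the task later depends on
for obs in trajectory:
    context, credit = mempo_step(context, obs, needed, advantage=1.0)

print(context)   # context stays bounded at 3 entries
print(credit)    # 2 of the 3 retained entries are needed -> 2/3
```

The point of the sketch is the coupling: because the credit for each step is scaled by how much of the compressed memory later mattered, updates that discard crucial information are penalized, which is how memory management gets aligned with the task objective rather than delegated to an external retrieval module.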