Memo: Training Memory-Efficient Embodied Agents with Reinforcement Learning

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Embodied agents struggle to maintain contextual awareness in long-horizon tasks with heavy visual input, because transformers have a fixed context window and recurrent models compress memory too aggressively. Method: We propose Memo, a transformer-based architecture that interleaves periodic summary tokens with the input stream to enable automatic memory compression, storage, and on-demand retrieval—integrated end-to-end within the model without external memory modules. Memory creation and retrieval are jointly optimized via reinforcement learning. Contribution/Results: Evaluated on grid-world meta-RL and photorealistic indoor multi-object navigation benchmarks, Memo significantly outperforms long-context baselines. It generalizes better to longer sequences at inference time and remains robust under streaming input conditions, achieving a favorable trade-off between long-term memory capacity and computational efficiency.
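The summary does not include code, but the interleaving and streaming-truncation scheme it describes can be sketched in plain Python. This is a hypothetical illustration, not the paper's implementation: `interleave_summaries` inserts a summary slot after every `period` observation tokens (in Memo these slots would be filled by learned summary embeddings), and `compress_context` mimics streaming inference by retaining only the summary slots plus a recent window of raw tokens.

```python
def interleave_summaries(tokens, period):
    """Insert a summary-slot marker after every `period` observation tokens.

    Hypothetical illustration of Memo's periodic summarization tokens;
    in the actual model these slots hold learned summary embeddings.
    """
    out = []
    for i, tok in enumerate(tokens, start=1):
        out.append(tok)
        if i % period == 0:
            out.append(("SUM", i // period))  # summary slot the model fills in
    return out


def compress_context(tokens, period, recent_window):
    """Keep only summary slots plus the most recent raw tokens,
    mimicking memory compression under streaming inference limits."""
    interleaved = interleave_summaries(tokens, period)
    summaries = [t for t in interleaved if isinstance(t, tuple) and t[0] == "SUM"]
    recent = tokens[-recent_window:]
    return summaries + recent


# Ten observation tokens, a summary every 4 steps, keep the last 3 raw tokens:
print(compress_context(list(range(10)), period=4, recent_window=3))
# → [('SUM', 1), ('SUM', 2), 7, 8, 9]
```

The point of the sketch is the storage trade-off: the compressed context grows as `n/period + recent_window` instead of `n`, which is why Memo can outperform full-context baselines while using less compute and memory.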

📝 Abstract
To enable embodied agents to operate effectively over extended timeframes, it is crucial to develop models that form and access memories to stay contextualized in their environment. In the current paradigm of training transformer-based policies for embodied sequential decision-making tasks, visual inputs often overwhelm the context limits of transformers, while humans can maintain and utilize a lifetime of experience compressed as memories. Significant compression is possible in principle, as much of the input is irrelevant and can be abstracted. However, existing approaches predominantly focus on either recurrent models with fixed-size memory or transformers with full-context reliance. In this work, we propose Memo, a transformer-based architecture and training recipe for reinforcement learning (RL) on memory-intensive, long-horizon tasks. Memo incorporates the creation and retrieval of memory by interleaving periodic summarization tokens with the inputs of a model during training. We demonstrate Memo's effectiveness on a gridworld meta-RL benchmark and a multi-object navigation task in photo-realistic indoor settings. Memo outperforms naive long-context transformer baselines while being more compute and storage efficient. Additionally, Memo generalizes better to longer contexts at inference time and remains robust in streaming settings, where historical context must be truncated to fit inference constraints.
Problem

Research questions and friction points this paper is trying to address.

Enabling embodied agents to operate over extended timeframes
Developing memory-efficient models for long-horizon reinforcement learning tasks
Addressing visual input overload in transformer-based policies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer architecture with periodic summarization tokens
Memory creation and retrieval for long-horizon tasks
Compute-efficient reinforcement learning with truncated context