🤖 AI Summary
This work addresses the limitation of existing vision-language-action (VLA) models in closed-loop robotic control, which typically rely on the Markov assumption and struggle to leverage historical context for tasks requiring long-term memory. To overcome this, the authors propose a memory-augmented VLA architecture featuring a dual-level recurrent learnable query mechanism, operating at both frame and chunk levels, to explicitly model short- and long-term memory. The model is further enhanced by a Past Observation Prediction auxiliary task that strengthens visual memory retention. The queries are trained end-to-end and add no inference overhead, implicitly guiding decision-making through learned memory representations. Extensive experiments in both simulation and real-world robotic settings demonstrate consistent and significant improvements over memory-free baselines and prior methods such as MemoryVLA across multiple memory dimensions: spatial, sequential, episodic, temporal, and visual.
📝 Abstract
Vision-language-action (VLA) models for closed-loop robot control are typically cast under the Markov assumption, making them prone to errors on tasks requiring historical context. To incorporate memory, existing VLAs either retrieve from a memory bank, which can be misled by distractors, or extend the frame window, whose fixed horizon still limits long-term retention. In this paper, we introduce ReMem-VLA, a Recurrent Memory VLA model equipped with two sets of learnable queries: frame-level recurrent memory queries for propagating information across consecutive frames to support short-term memory, and chunk-level recurrent memory queries for carrying context across temporal chunks for long-term memory. These queries are trained end-to-end to aggregate and maintain relevant context over time, implicitly guiding the model's decisions without additional training or inference cost. Furthermore, to enhance visual memory, we introduce Past Observation Prediction as an auxiliary training objective. Through extensive memory-centric simulation and real-world robot experiments, we demonstrate that ReMem-VLA exhibits strong memory capabilities across multiple dimensions, including spatial, sequential, episodic, temporal, and visual memory. ReMem-VLA significantly outperforms memory-free VLA baselines $\pi_{0.5}$ and OpenVLA-OFT and surpasses MemoryVLA on memory-dependent tasks by a large margin.
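To make the dual-level recurrence concrete, here is a minimal numpy sketch of how frame-level and chunk-level recurrent memory queries might be maintained. This is an illustrative assumption, not the paper's actual implementation: the class name, single-head attention, dimensions, and the chunk-boundary update rule are all hypothetical, chosen only to show the idea of short-term queries refreshed every frame and long-term queries updated once per chunk.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d):
    # Minimal single-head attention: each query attends over keys_values.
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ keys_values

class RecurrentMemoryQueries:
    """Hypothetical sketch of dual-level recurrent memory queries.

    Frame-level queries are refreshed at every step by attending to the
    current frame's visual tokens (short-term memory). Chunk-level
    queries are updated only every `chunk_size` steps, attending to the
    accumulated frame-level memory, so they carry longer-horizon context.
    """
    def __init__(self, d=32, n_frame_q=4, n_chunk_q=4, chunk_size=8, seed=0):
        rng = np.random.default_rng(seed)
        self.d = d
        self.chunk_size = chunk_size
        self.frame_q = rng.normal(size=(n_frame_q, d))  # short-term state
        self.chunk_q = rng.normal(size=(n_chunk_q, d))  # long-term state
        self.t = 0

    def step(self, frame_tokens):
        # Short-term: previous frame queries + current frame tokens
        # produce the new frame queries (recurrence across frames).
        ctx = np.concatenate([self.frame_q, frame_tokens], axis=0)
        self.frame_q = cross_attention(self.frame_q, ctx, self.d)
        self.t += 1
        # Long-term: at a chunk boundary, fold the frame-level memory
        # into the chunk queries (recurrence across chunks).
        if self.t % self.chunk_size == 0:
            ctx = np.concatenate([self.chunk_q, self.frame_q], axis=0)
            self.chunk_q = cross_attention(self.chunk_q, ctx, self.d)
        # Both query sets would condition the downstream action head.
        return np.concatenate([self.frame_q, self.chunk_q], axis=0)
```

Because the memory lives entirely in the fixed-size query tensors rather than in a growing buffer, per-step compute stays constant regardless of episode length, which is consistent with the abstract's claim of no added inference cost.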