🤖 AI Summary
Existing embodied AI benchmarks lack rigorous evaluation of long-term memory, particularly for extended interactive sequences. Method: We introduce the first embodied long-term memory benchmark for prolonged interaction, comprising 60 procedurally generated, scalable tasks in the Habitat simulation environment. These tasks span multi-step navigation, object manipulation, and cross-temporal contextual dependencies, demanding sustained environmental perception and retrieval of historical information. Crucially, our benchmark unifies long-term visual memory modeling with embodied action execution under a single evaluation framework, enabling progressive difficulty scaling. Contribution/Results: It addresses critical gaps in current long-video QA benchmarks—namely, their neglect of low-level motor skills and interaction history modeling. Experiments reveal severe performance degradation in state-of-the-art vision-language models when integrating hundreds of frames of history, with baseline success rates below 40%. This highlights three core bottlenecks: memory retrieval, efficient compression, and action grounding. Our benchmark provides a reproducible, extensible evaluation paradigm for embodied memory research.
📝 Abstract
Large vision-language models have recently demonstrated impressive performance in planning and control tasks, driving interest in their application to real-world robotics. However, deploying these models for reasoning in embodied contexts is constrained by their limited capacity to incorporate long-term experience collected across multiple days and represented by vast collections of images. Current VLMs typically struggle to process more than a few hundred images concurrently, highlighting the need for more efficient mechanisms to handle long-term memory in embodied settings. To effectively evaluate these models for long-horizon control, a benchmark must specifically target scenarios where memory is crucial for success. Existing long-video QA benchmarks overlook embodied challenges like object manipulation and navigation, which demand low-level skills and fine-grained reasoning over past interactions. Moreover, effective memory integration in embodied agents involves both recalling relevant historical information and executing actions based on that information, making it essential to study these aspects together rather than in isolation. In this work, we introduce a new benchmark for long-range embodied tasks in the Habitat simulator. This benchmark evaluates memory-based capabilities across 60 tasks requiring sustained engagement and contextual awareness within an environment. The tasks can also be procedurally extended to longer and more challenging versions, enabling scalable evaluation of memory and reasoning. We also present baselines that integrate state-of-the-art VLMs with low-level navigation policies, assessing their performance on these memory-intensive tasks and highlighting areas for improvement.
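To make the idea of procedural difficulty scaling concrete, the sketch below shows one way such a generator could be parameterized: longer task variants simply increase the number of rooms the agent must traverse and the number of past observations it must later recall. All names here (`MemoryTask`, `generate_task`, the room/object naming scheme) are illustrative assumptions, not the benchmark's actual API.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of procedural task scaling for a memory benchmark.
# Structure and names are assumptions for illustration only.

@dataclass
class MemoryTask:
    """A toy long-horizon task: traverse rooms, then recall object locations."""
    rooms: list                # rooms the agent must visit, in order
    object_placements: dict    # object -> room where it was observed
    recall_queries: list       # objects the agent must later locate from memory

def generate_task(num_rooms: int, num_queries: int, seed: int = 0) -> MemoryTask:
    """Scale difficulty by increasing rooms to traverse and items to recall."""
    rng = random.Random(seed)
    rooms = [f"room_{i}" for i in range(num_rooms)]
    objects = [f"object_{i}" for i in range(num_rooms)]
    placements = {obj: rng.choice(rooms) for obj in objects}
    queries = rng.sample(objects, k=min(num_queries, len(objects)))
    return MemoryTask(rooms, placements, queries)

# Longer, more memory-intensive variants come from larger parameters alone:
easy = generate_task(num_rooms=5, num_queries=2)
hard = generate_task(num_rooms=50, num_queries=10)
```

Because difficulty is a function of a few integers, the same task template can be stretched into arbitrarily long interaction histories, which is what lets the benchmark probe the point at which a VLM's memory degrades.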