🤖 AI Summary
This work addresses the challenge of first-person scene understanding and persistent memory modeling for embodied agents operating in dynamic 3D environments. We propose a unified scene memory framework that jointly encodes multimodal embodied sensory signals—including egocentric video, depth, and ego-pose—within a single architecture. Our approach integrates embodied perception with vision-language models (VLMs) and introduces an action-aware memory update mechanism that enables real-time reasoning about environmental changes and object interactions. The method combines an LLM-driven decision-making module, multimodal temporal fusion, and embodied perception alignment. Evaluated on the Ego4D-VQ3D, OpenEQA, and EnvQA benchmarks, our framework achieves improvements of +4.9%, +5.8%, and +11.7%, respectively, and further demonstrates strong capabilities in robotic manipulation and embodied interaction generation.
📝 Abstract
This paper investigates the problem of understanding dynamic 3D scenes from egocentric observations, a key challenge in robotics and embodied AI. Unlike prior studies that framed this as long-form video understanding and relied on egocentric video alone, we propose an LLM-based agent, Embodied VideoAgent, which constructs scene memory from both egocentric video and embodied sensory inputs (e.g., depth and pose sensing). We further introduce a VLM-based approach that automatically updates the memory when actions or activities over objects are perceived. Embodied VideoAgent attains significant advantages over its counterparts on challenging reasoning and planning tasks in 3D scenes, achieving gains of 4.9% on Ego4D-VQ3D, 5.8% on OpenEQA, and 11.7% on EnvQA. We also demonstrate its potential in various embodied AI tasks, including embodied interaction generation and perception for robot manipulation. The code and demo will be made public.