Embodied VideoAgent: Persistent Memory from Egocentric Videos and Embodied Sensors Enables Dynamic Scene Understanding

📅 2024-12-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of first-person scene understanding and persistent memory modeling for embodied agents operating in dynamic 3D environments. We propose a unified scene memory framework that jointly encodes multimodal embodied sensory signals—including egocentric video, depth, and ego-pose—within a single architecture. Our approach integrates embodied perception with vision-language models (VLMs) and introduces an action-aware memory update mechanism to enable real-time reasoning about environmental changes and object interactions. The method incorporates an LLM-driven decision-making module, multimodal temporal fusion, and embodied perception alignment. Evaluated on the Ego4D-VQ3D, OpenEQA, and EnvQA benchmarks, the framework achieves improvements of +4.9%, +5.8%, and +11.7%, respectively. It also shows promise for robotic manipulation and embodied interaction generation.

📝 Abstract
This paper investigates the problem of understanding dynamic 3D scenes from egocentric observations, a key challenge in robotics and embodied AI. Unlike prior studies that explored this as long-form video understanding and utilized egocentric video only, we instead propose an LLM-based agent, Embodied VideoAgent, which constructs scene memory from both egocentric video and embodied sensory inputs (e.g. depth and pose sensing). We further introduce a VLM-based approach to automatically update the memory when actions or activities over objects are perceived. Embodied VideoAgent attains significant advantages over counterparts in challenging reasoning and planning tasks in 3D scenes, achieving gains of 4.9% on Ego4D-VQ3D, 5.8% on OpenEQA, and 11.7% on EnvQA. We have also demonstrated its potential in various embodied AI tasks including generating embodied interactions and perception for robot manipulation. The code and demo will be made public.
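The page does not detail the memory architecture itself; the following is a minimal, hypothetical Python sketch of what an action-aware scene memory update could look like: objects observed in egocentric frames are stored with fused 3D positions (from depth and ego-pose), and perceived actions such as "pick up" or "put down" mutate the memory even after the object leaves the field of view. All class and method names are illustrative assumptions, not the paper's API.

```python
from dataclasses import dataclass


@dataclass
class ObjectEntry:
    """One object in the persistent scene memory."""
    name: str
    position: tuple  # estimated 3D position in the world frame
    held: bool = False  # True while the agent is carrying the object


class SceneMemory:
    """Persistent object memory updated by perceived actions.

    In the paper's setting, positions would come from fusing egocentric
    detections with depth and ego-pose; here the fused world position is
    passed in directly to keep the sketch self-contained.
    """

    def __init__(self):
        self.objects = {}

    def observe(self, obj_id, name, world_pos):
        # Add or refresh an entry from a new visual observation.
        # A held object moves with the agent, so stale detections of it
        # are ignored rather than overwriting its state.
        entry = self.objects.get(obj_id)
        if entry is None:
            self.objects[obj_id] = ObjectEntry(name, world_pos)
        elif not entry.held:
            entry.position = world_pos

    def on_action(self, verb, obj_id, agent_pos=None):
        # Action-aware update: an interaction changes memory state even
        # when the object is no longer visible in the current frame.
        entry = self.objects.get(obj_id)
        if entry is None:
            return
        if verb == "pick up":
            entry.held = True
        elif verb == "put down":
            entry.held = False
            if agent_pos is not None:
                entry.position = agent_pos
```

For example, after `on_action("pick up", "cup1")`, later sightings of the cup no longer move it in memory, and `on_action("put down", "cup1", agent_pos=...)` records its new resting place — the kind of state that pure frame-by-frame video understanding would lose.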
Problem

Research questions and friction points this paper is trying to address.

Artificial Intelligence
First-person Perspective
3D Environment Understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Embodied VideoAgent
Vision-Language Model
3D World Problem Solving
👥 Authors
Yue Fan
State Key Laboratory of General Artificial Intelligence, BIGAI, Beijing, China
Xiaojian Ma
University of California, Los Angeles
Rongpeng Su
BIGAI
Jun Guo
State Key Laboratory of General Artificial Intelligence, BIGAI, Beijing, China; Tsinghua University
Rujie Wu
Peking University
Xi Chen
State Key Laboratory of General Artificial Intelligence, BIGAI, Beijing, China
Qing Li
State Key Laboratory of General Artificial Intelligence, BIGAI, Beijing, China