Perceiving and Acting in First-Person: A Dataset and Benchmark for Egocentric Human-Object-Human Interactions

πŸ“… 2025-08-06
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing datasets are largely confined to single interaction categories and lack the capacity to model embodied interactions from a first-person perspective. To address this, we propose InterVLA, the first large-scale, first-person, human-object-human interaction vision-language-action dataset and benchmark. Leveraging egocentric and multi-angle exocentric RGB videos (2 first-person and 5 third-person views), high-fidelity motion capture, synchronized spoken instructions, and GPT-generated structured interaction scripts, InterVLA comprises 11.4 hours of video (1.2 million frames) with rich multimodal annotations. Methodologically, we introduce a hybrid acquisition system integrated with cross-view motion alignment techniques. This enables three novel benchmarks: egocentric human motion estimation, interaction synthesis, and interaction prediction. Experiments on these benchmarks provide a comprehensive analysis of how well current methods perceive and model complex, socially grounded interactions in realistic physical environments.
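The summary mentions cross-view motion alignment without detailing it. As a rough illustration of the general idea only, the sketch below aligns MoCap joint positions to a camera-centric frame with a rigid Procrustes (Kabsch) fit over corresponding 3D points; the function names and the alignment strategy are assumptions for illustration, not InterVLA's actual pipeline.

```python
# Hedged sketch: rigid alignment of MoCap joints to a camera world frame via
# the Kabsch algorithm. This is an illustrative assumption, not the paper's
# documented cross-view alignment method.
import numpy as np

def kabsch_align(src: np.ndarray, dst: np.ndarray):
    """Find R, t minimizing ||R @ src_i + t - dst_i|| over matched points.

    src, dst: (N, 3) arrays of corresponding 3D joints, e.g. MoCap joints and
    the same joints triangulated from the exocentric cameras.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Usage: map every MoCap frame into the camera frame.
# mocap_joints: (T, J, 3); cam_joints: (J, 3) for one calibration pose.
# R, t = kabsch_align(mocap_joints[0], cam_joints)
# aligned = mocap_joints @ R.T + t
```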

πŸ“ Abstract
Learning action models from real-world human-centric interaction datasets is important for building general-purpose intelligent assistants efficiently. However, most existing datasets only cover a specialist interaction category and ignore that AI assistants perceive and act from a first-person perspective. We argue that both generalist interaction knowledge and the egocentric modality are indispensable. In this paper, we embed the manual-assisted task into a vision-language-action framework, where the assistant serves the instructor based on egocentric vision and verbal commands. With our hybrid RGB-MoCap system, pairs of assistants and instructors interact with multiple objects and the scene following GPT-generated scripts. Under this setting, we present InterVLA, the first large-scale human-object-human interaction dataset, with 11.4 hours and 1.2M frames of multimodal data spanning 2 egocentric and 5 exocentric videos, accurate human/object motions, and verbal commands. Furthermore, we establish novel benchmarks on egocentric human motion estimation, interaction synthesis, and interaction prediction with comprehensive analysis. We believe that the InterVLA testbed and its benchmarks will foster future work on building AI agents in the physical world.
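To make the data composition concrete, here is a minimal, purely hypothetical sketch of what one captured clip could look like as a Python record, based only on the modalities listed in the abstract (2 egocentric and 5 exocentric videos, human/object motions, verbal commands, GPT-generated scripts); the field names and shapes are assumptions, not InterVLA's actual schema.

```python
# Hypothetical sketch of one InterVLA clip. Field names and array shapes are
# assumptions drawn from the modalities listed in the abstract, not the
# dataset's real format.
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class InterVLAClip:
    egocentric_videos: List[str]   # 2 first-person RGB video paths
    exocentric_videos: List[str]   # 5 third-person RGB video paths
    assistant_motion: np.ndarray   # (T, J, 3) MoCap joints of the assistant
    instructor_motion: np.ndarray  # (T, J, 3) MoCap joints of the instructor
    object_poses: np.ndarray       # (T, num_objects, 7) xyz + quaternion
    verbal_commands: List[str]     # instructor's spoken instructions
    script: str                    # GPT-generated interaction script

def frame_count(clip: InterVLAClip) -> int:
    # Assumes all motion streams are synchronized to one common timeline.
    return clip.assistant_motion.shape[0]
```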
Problem

Research questions and friction points this paper is trying to address.

Lack of generalist interaction knowledge in AI datasets
Absence of first-person perception in existing datasets
Need for multimodal human-object-human interaction data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-language-action framework for egocentric interaction
Hybrid RGB-MoCap system captures multimodal data
Large-scale dataset with human-object-human interactions
πŸ‘₯ Authors
Liang Xu
MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University
Chengqun Yang
MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University
Zili Lin
MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University
Fei Xu
MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University
Yifan Liu
MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University
Congsheng Xu
Undergraduate, SEIEE, Shanghai Jiao Tong University
Human Motion Generation, Digital Twin, Embodied AI
Yiyi Zhang
Cornell University
Computer Vision, Generative Models
Jie Qin
Professor, Nanjing University of Aeronautics and Astronautics
Computer Vision, Machine Learning, Pattern Recognition
Xingdong Sheng
Lenovo
Yunhui Liu
Nanjing University
Graph Machine Learning
Xin Jin
Ningbo Institute of Digital Twin, Eastern Institute of Technology, Ningbo, China
Yichao Yan
MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University
Wenjun Zeng
Ningbo Institute of Digital Twin, Eastern Institute of Technology, Ningbo, China
Xiaokang Yang
MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University