AI Summary
Existing datasets are largely confined to single interaction categories and do not model embodied interactions from a first-person perspective. To address this, we propose InterVLA, the first large-scale, first-person, human-object-human interaction vision-language-action dataset and benchmark. Leveraging dual-view RGB video (egocentric plus multi-angle exocentric), high-fidelity motion capture, synchronized spoken instructions, and GPT-generated structured interaction scripts, InterVLA comprises 11.4 hours of video (1.2 million frames) with rich multimodal annotations. Methodologically, we introduce a hybrid acquisition system with cross-view motion alignment, which enables three benchmark tasks: egocentric human motion estimation, interaction synthesis, and interaction prediction. Experiments demonstrate that InterVLA substantially improves an agent's ability to perceive and model complex, socially grounded interactions in realistic physical environments.
Abstract
Learning action models from real-world, human-centric interaction data is important for building general-purpose intelligent assistants efficiently. However, most existing datasets cover only a single specialist interaction category and ignore that AI assistants perceive and act from a first-person viewpoint. We argue that both generalist interaction knowledge and the egocentric modality are indispensable. In this paper, we embed the manual-assistance task into a vision-language-action framework, in which the assistant serves the instructor guided by egocentric vision and verbal commands. With our hybrid RGB-MoCap system, pairs of assistants and instructors interact with multiple objects and the scene following GPT-generated scripts. Under this setting, we build InterVLA, the first large-scale human-object-human interaction dataset, comprising 11.4 hours and 1.2M frames of multimodal data spanning 2 egocentric and 5 exocentric videos, accurate human/object motions, and verbal commands. Furthermore, we establish novel benchmarks on egocentric human motion estimation, interaction synthesis, and interaction prediction, with comprehensive analysis. We believe that the InterVLA testbed and its benchmarks will foster future work on building AI agents in the physical world.
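To make the dataset composition concrete, the sketch below shows one plausible way a single InterVLA sequence could be represented for loading. It is a minimal sketch under stated assumptions: the class name `InterVLASequence`, the field names, and the array shapes are illustrative and are not the dataset's released schema.

```python
# A minimal, hypothetical sketch of one InterVLA sequence record.
# Field names and array shapes are illustrative assumptions, not the released schema.
from dataclasses import dataclass, field
from typing import List, Optional

import numpy as np


@dataclass
class InterVLASequence:
    sequence_id: str
    script: str                                   # GPT-generated interaction script
    command_text: str                             # transcribed verbal command from the instructor
    ego_video_paths: List[str] = field(default_factory=list)  # 2 egocentric RGB views
    exo_video_paths: List[str] = field(default_factory=list)  # 5 exocentric RGB views
    human_motion: Optional[np.ndarray] = None     # (T, num_subjects, J, 3) joint positions
    object_motion: Optional[np.ndarray] = None    # (T, num_objects, 7) object poses (xyz + quaternion)

    @property
    def num_frames(self) -> int:
        """Number of synchronized frames in this sequence (0 if motion is missing)."""
        return 0 if self.human_motion is None else self.human_motion.shape[0]
```

Under these assumptions, such a record groups the modalities the three benchmarks would consume: egocentric motion estimation reads the egocentric views against the captured human motion, while interaction synthesis and prediction condition on the script, the verbal command, and past human/object motion.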