🤖 AI Summary
This work addresses the challenge that policies learned from first-person (egocentric) human videos need active viewpoint control at deployment to keep task-critical regions visible, and that simply imitating human viewpoints fails to provide this because it relies on human-specific priors that do not transfer to robots. To overcome this, the authors propose EgoAVFlow, which jointly learns manipulation policies and active visual control from a shared 3D optical flow representation, without requiring robot demonstrations. The approach uses diffusion models to predict robot actions, future 3D optical flow, and camera trajectories, together with a visibility-aware reward computed from the predicted motion and scene geometry. During inference, camera trajectories are refined through reward-maximization-guided denoising, enabling geometry-aware active viewpoint control. Real-world experiments with actively changing viewpoints show that the method consistently outperforms prior human-demonstration-based approaches, achieving both robust manipulation and effective maintenance of task-relevant visibility.
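As a rough illustration of the test-time refinement step described above, the sketch below shows one way reward-maximization-guided denoising of a camera trajectory could look. The `denoiser` and `visibility_reward` callables, the 6-DoF trajectory parameterization, and the guidance scale are illustrative assumptions, not the paper's actual interfaces.

```python
import torch

# Hypothetical sketch of reward-maximization-guided denoising for a camera
# trajectory. `denoiser(traj, t)` is assumed to perform one reverse-diffusion
# step, and `visibility_reward(traj)` is assumed to return a scalar reward
# that is differentiable in the trajectory. Both are placeholders.

def guided_denoise(denoiser, visibility_reward, steps=50, horizon=16,
                   guidance_scale=1.0, device="cpu"):
    """Sample a camera trajectory while nudging each denoising step
    toward higher predicted task-relevant visibility."""
    # Trajectory of 6-DoF camera poses (e.g. xyz translation + rotation vector).
    traj = torch.randn(horizon, 6, device=device)
    for t in reversed(range(steps)):
        # Standard reverse-diffusion update proposed by the denoiser.
        traj = denoiser(traj, t)
        # Guidance: ascend the gradient of the visibility-aware reward
        # with respect to the current trajectory sample.
        traj = traj.detach().requires_grad_(True)
        reward = visibility_reward(traj)              # scalar reward
        grad = torch.autograd.grad(reward, traj)[0]
        traj = (traj + guidance_scale * grad).detach()
    return traj
```

The key idea, under these assumptions, is classifier-guidance-style sampling: each denoising step is pulled along the reward gradient, so the final trajectory trades off staying on the learned camera-motion distribution against keeping the predicted motion visible.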
📝 Abstract
Egocentric human videos provide a scalable source of manipulation demonstrations; however, deploying them on robots requires active viewpoint control to maintain task-critical visibility, which human viewpoint imitation often fails to provide due to human-specific priors. We propose EgoAVFlow, which learns manipulation and active vision from egocentric videos through a shared 3D flow representation that supports geometric visibility reasoning and transfers without robot demonstrations. EgoAVFlow uses diffusion models to predict robot actions, future 3D flow, and camera trajectories, and refines viewpoints at test time with reward-maximizing denoising under a visibility-aware reward computed from predicted motion and scene geometry. Real-world experiments under actively changing viewpoints show that EgoAVFlow consistently outperforms prior human-demo-based baselines, demonstrating effective visibility maintenance and robust manipulation without robot demonstrations.
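To make the notion of a visibility-aware reward concrete, here is a minimal sketch of one plausible formulation: score a candidate camera pose by the fraction of predicted 3D flow endpoints that project inside its image. The function name, pinhole projection, image size, and thresholds are assumptions for illustration; the paper's actual reward may combine visibility with additional geometric terms.

```python
import torch

# Illustrative visibility-aware reward. Predicted 3D flow endpoints are
# assumed to be given as (N, 3) points in the world frame; `cam_pose` is a
# 4x4 world-to-camera transform and `intrinsics` a 3x3 pinhole matrix.

def visibility_reward(points_world, cam_pose, intrinsics, img_hw=(480, 640)):
    """Fraction of predicted motion points that land inside the image of a
    candidate camera pose and lie in front of the camera."""
    H, W = img_hw
    ones = torch.ones(points_world.shape[0], 1)
    pts_h = torch.cat([points_world, ones], dim=1)        # (N, 4) homogeneous
    pts_cam = (cam_pose @ pts_h.T).T[:, :3]               # (N, 3) camera frame
    in_front = pts_cam[:, 2] > 1e-3                       # positive depth only
    uv = (intrinsics @ pts_cam.T).T                       # (N, 3) pixel coords
    uv = uv[:, :2] / uv[:, 2:].clamp(min=1e-3)
    in_image = (uv[:, 0] >= 0) & (uv[:, 0] < W) & \
               (uv[:, 1] >= 0) & (uv[:, 1] < H)
    return (in_front & in_image).float().mean()
```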