🤖 AI Summary
This work proposes a modular system for humanoid table tennis that operates solely on onboard first-person vision, eliminating reliance on external perception. By integrating low-latency visual perception, a generative action prior, and a scalable whole-body motion control framework, the approach achieves, for the first time, continuous and precise ball returns without external cameras. The system supports diverse striking maneuvers, including powerful smashes and low crouching shots, and demonstrates robust upper- and lower-body coordination during high-speed rallies in real-world settings. This significantly enhances the autonomy and dynamic motor capabilities of humanoids in fast-paced, interactive tasks.
📝 Abstract
Existing humanoid table-tennis systems remain limited by their reliance on external sensing and their inability to achieve agile whole-body coordination for precise task execution. These limitations stem from two core challenges: achieving low-latency, robust onboard egocentric perception under fast robot motion, and obtaining sufficiently diverse, task-aligned strike motions for learning precise yet natural whole-body behaviors. In this work, we present \methodname, a modular system for agile humanoid table tennis that unifies scalable whole-body skill learning with onboard egocentric perception, eliminating the need for external cameras during deployment. Our work advances prior humanoid table-tennis systems in three key aspects. First, we achieve agile and precise ball interaction with tightly coordinated whole-body control, rather than relying on decoupled upper- and lower-body behaviors. This enables the system to perform diverse strike motions, including explosive whole-body smashes and low crouching shots. Second, by augmenting and diversifying strike motions with a generative model, our framework benefits from scalable motion priors and produces natural, robust striking behaviors across a wide workspace. Third, to the best of our knowledge, we demonstrate the first humanoid table-tennis system capable of consecutive strikes using onboard sensing alone, despite the challenges of low-latency perception, ego-motion-induced instability, and a limited field of view. Extensive real-world experiments demonstrate stable and precise ball exchanges under high-speed conditions, validating scalable, perception-driven whole-body skill learning for dynamic humanoid interaction tasks.