🤖 AI Summary
To address the limited robustness of 3D environmental perception in humanoid robots, where self-motion and joint occlusions degrade sensing, this paper introduces the first general-purpose multimodal occupancy perception system tailored to humanoid platforms. Methodologically, we construct a dedicated panoramic occupancy dataset and design a standardized multi-sensor configuration integrating RGB-D cameras, IMUs, and joint encoders. Our approach fuses multimodal features with explicit temporal modeling and proposes a lightweight semantic occupancy network that generates high-fidelity, semantically labeled 3D occupancy grids in real time. Key contributions include: (1) the first humanoid-specific occupancy perception benchmark; (2) a hardware-algorithm co-designed robust perception paradigm; and (3) a unified multimodal input interface with a transferable architecture. Experiments demonstrate significant improvements in joint occupancy-and-semantics prediction accuracy under complex, dynamic scenarios, establishing a reliable environmental representation foundation for navigation and task planning.
📝 Abstract
Humanoid robot technology is advancing rapidly, with manufacturers introducing diverse, heterogeneous visual perception modules tailored to specific scenarios. Among the various perception paradigms, occupancy-based representation is widely recognized as particularly suitable for humanoid robots, since it provides both the rich semantic and 3D geometric information essential for comprehensive environmental understanding. In this work, we present Humanoid Occupancy, a generalized multimodal occupancy perception system that integrates hardware and software components, data acquisition devices, and a dedicated annotation pipeline. Our framework employs advanced multimodal fusion techniques to generate grid-based occupancy outputs encoding both occupancy status and semantic labels, enabling holistic environmental understanding for downstream tasks such as task planning and navigation. To address challenges unique to humanoid robots, including kinematic interference and occlusion, we establish an effective sensor layout strategy. Furthermore, we have developed the first panoramic occupancy dataset specifically for humanoid robots, offering a valuable benchmark and resource for future research and development in this domain. The network architecture incorporates multimodal feature fusion and temporal information integration to ensure robust perception. Overall, Humanoid Occupancy delivers effective environmental perception for humanoid robots and lays a technical foundation for standardizing universal visual modules, paving the way for the widespread deployment of humanoid robots in complex real-world scenarios.
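To make the output representation described above concrete, here is a minimal sketch of a grid-based semantic occupancy map: a dense voxel array that stores an occupancy status and a semantic label per cell. This is an illustration only, not the paper's implementation; the grid dimensions, resolution, class names, and the `update_voxel` helper are all hypothetical.

```python
import numpy as np

# Hypothetical grid: 50 x 50 x 20 voxels at 0.1 m resolution around the robot.
GRID_SHAPE = (50, 50, 20)
RESOLUTION = 0.1                      # meters per voxel edge
ORIGIN = (-2.5, -2.5, 0.0)            # world coordinates of voxel (0, 0, 0)
FREE, OCCUPIED, UNKNOWN = 0, 1, 2     # occupancy states
CLASSES = {0: "empty", 1: "floor", 2: "wall", 3: "furniture", 4: "human"}

# Two parallel volumes: occupancy status and semantic label per voxel.
occupancy = np.full(GRID_SHAPE, UNKNOWN, dtype=np.uint8)
semantics = np.zeros(GRID_SHAPE, dtype=np.uint8)

def update_voxel(xyz, label):
    """Mark the voxel containing world point `xyz` occupied with `label`.

    Returns the voxel index, or None if the point falls outside the grid.
    """
    idx = tuple(int((c - o) / RESOLUTION) for c, o in zip(xyz, ORIGIN))
    if all(0 <= i < s for i, s in zip(idx, GRID_SHAPE)):
        occupancy[idx] = OCCUPIED
        semantics[idx] = label
        return idx
    return None

# Example: a depth-derived point on a wall at world coordinates (1.0, 0.3, 1.2).
idx = update_voxel((1.0, 0.3, 1.2), label=2)
print(CLASSES[int(semantics[idx])])
```

In practice, a perception network would predict the `occupancy` and `semantics` volumes directly from fused sensor features rather than updating voxels point by point, but the downstream interface for planning and navigation is the same: a queryable labeled 3D grid.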