🤖 AI Summary
This work addresses the limitations of existing semantic occupancy prediction methods, which are primarily designed for wheeled autonomous vehicles and rely solely on RGB inputs, leaving them insufficiently robust for legged robots in complex terrains. To overcome this, the authors propose VoxelHound, a framework tailored for quadrupedal robots that integrates spherical panoramic images with multimodal sensor data. The framework introduces Vertical Jitter Compensation (VJC) and Multimodal Information Prompt Fusion (MIPF) mechanisms to enhance spatial consistency and perceptual robustness. Additionally, the work presents PanoMMOcc, the first panoramic multimodal semantic occupancy dataset specifically curated for quadrupedal robots. Experimental results demonstrate state-of-the-art performance on the proposed dataset, with a 4.16% improvement in mIoU. The dataset, code, and calibration tools are publicly released to support further research.
📝 Abstract
Panoramic imagery provides holistic 360° visual coverage for perception in quadruped robots. However, existing occupancy prediction methods are mainly designed for wheeled autonomous-driving platforms and rely heavily on RGB cues, limiting their robustness in complex environments. To bridge this gap, (1) we present PanoMMOcc, the first real-world panoramic multimodal occupancy dataset for quadruped robots, featuring four sensing modalities across diverse scenes. (2) We propose VoxelHound, a panoramic multimodal occupancy perception framework tailored for legged mobility and spherical imaging. Specifically, we design (i) a Vertical Jitter Compensation (VJC) module that mitigates the severe viewpoint perturbations caused by body pitch and roll during locomotion, enabling more consistent spatial reasoning, and (ii) a Multimodal Information Prompt Fusion (MIPF) module that jointly leverages panoramic visual cues and auxiliary modalities to enhance volumetric occupancy prediction. (3) We establish a benchmark based on PanoMMOcc and provide detailed data analysis to enable systematic evaluation of perception methods under challenging embodied scenarios. Extensive experiments demonstrate that VoxelHound achieves state-of-the-art performance on PanoMMOcc (+4.16% mIoU). The dataset and code will be publicly released at https://github.com/SXDR/PanoMMOcc, along with calibration tools at https://github.com/losehu/CameraLiDAR-Calib, to facilitate future research on panoramic multimodal 3D perception for embodied robotic systems.