Panoramic Multimodal Semantic Occupancy Prediction for Quadruped Robots

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing semantic occupancy prediction methods, which are primarily designed for wheeled autonomous vehicles and rely solely on RGB inputs, leading to insufficient robustness for legged robots in complex terrains. To overcome this, we propose VoxelHound, a novel framework tailored for quadrupedal robots that integrates spherical panoramic images with multimodal sensor data. The framework introduces Vertical Jitter Compensation (VJC) and Multimodal Information Prompt Fusion (MIPF) mechanisms to enhance spatial consistency and perceptual robustness. Additionally, we present PanoMMOcc, the first panoramic multimodal semantic occupancy dataset specifically curated for quadrupedal robots. Experimental results demonstrate that our approach achieves state-of-the-art performance on the proposed dataset, with a 4.16% improvement in mIoU. We publicly release the dataset, code, and calibration tools to support further research.
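As background on the spherical panoramic input (not detailed in the paper itself), a minimal sketch of the standard equirectangular pixel-to-ray mapping; the function name and axis conventions here are illustrative assumptions, not taken from this work:

```python
import numpy as np

def equirect_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit ray direction.

    Assumed convention: longitude spans [-pi, pi) across the image width,
    latitude spans [pi/2, -pi/2] from the top row to the bottom row.
    """
    lon = (u / width) * 2.0 * np.pi - np.pi   # horizontal angle
    lat = np.pi / 2.0 - (v / height) * np.pi  # vertical angle
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)
    return np.array([x, y, z])
```

Under this convention, the image center maps to the forward direction and the top row maps to straight up, which is why a single panoramic image can drive full 360° volumetric reasoning.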

📝 Abstract
Panoramic imagery provides holistic 360° visual coverage for perception in quadruped robots. However, existing occupancy prediction methods are mainly designed for wheeled autonomous driving and rely heavily on RGB cues, limiting their robustness in complex environments. To bridge this gap, (1) we present PanoMMOcc, the first real-world panoramic multimodal occupancy dataset for quadruped robots, featuring four sensing modalities across diverse scenes. (2) We propose VoxelHound, a panoramic multimodal occupancy perception framework tailored for legged mobility and spherical imaging. Specifically, we design (i) a Vertical Jitter Compensation (VJC) module to mitigate the severe viewpoint perturbations caused by body pitch and roll during locomotion, enabling more consistent spatial reasoning, and (ii) a Multimodal Information Prompt Fusion (MIPF) module that jointly leverages panoramic visual cues and auxiliary modalities to enhance volumetric occupancy prediction. (3) We establish a benchmark on PanoMMOcc with detailed data analysis to enable systematic evaluation of perception methods under challenging embodied scenarios. Extensive experiments demonstrate that VoxelHound achieves state-of-the-art performance on PanoMMOcc (+4.16% in mIoU). The dataset and code will be publicly released at https://github.com/SXDR/PanoMMOcc, and the calibration tools at https://github.com/losehu/CameraLiDAR-Calib, to facilitate future research on panoramic multimodal 3D perception for embodied robotic systems.
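The paper does not spell out how VJC is implemented. As a rough illustration of the underlying idea of removing body pitch and roll from the observations, here is a minimal gravity-alignment sketch for a body-frame point cloud; the function name, IMU sign conventions, and rotation order are assumptions for illustration, not the authors' method:

```python
import numpy as np

def gravity_align(points, roll, pitch):
    """Rotate body-frame points into a gravity-aligned frame using
    IMU roll/pitch (radians), removing viewpoint tilt from locomotion.

    points: (N, 3) array in the robot body frame.
    Assumed body orientation: R = R_y(pitch) @ R_x(roll).
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0],
                   [0, cr, -sr],
                   [0, sr, cr]])   # roll about the x (forward) axis
    Ry = np.array([[cp, 0, sp],
                   [0, 1, 0],
                   [-sp, 0, cp]])  # pitch about the y (lateral) axis
    R = Ry @ Rx                    # body frame -> gravity-aligned frame
    return points @ R.T            # apply R to each point (row vector)
```

A learned module like VJC would presumably go beyond this rigid de-rotation (e.g., compensating in feature space), but the sketch shows why a tilted body frame breaks voxel alignment and how orientation priors restore spatial consistency.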
Problem

Research questions and friction points this paper is trying to address.

occupancy prediction
quadruped robots
panoramic perception
multimodal sensing
3D perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

panoramic multimodal perception
semantic occupancy prediction
quadruped robots
Vertical Jitter Compensation
Multimodal Information Prompt Fusion
Guoqiang Zhao
School of Artificial Intelligence and Robotics, Hunan University
Zhe Yang
School of Artificial Intelligence and Robotics, Hunan University
Sheng Wu
School of Artificial Intelligence and Robotics, Hunan University
Fei Teng
Reader in Intelligent Energy Systems, Imperial College London
Stability-constrained Optimisation · Cyber-resilient System Operation · Data Privacy and Trading
Mengfei Duan
PhD student at Hunan University
Anomaly Detection · Out-of-Distribution Detection · Panoramic Semantic Segmentation
Yuanfan Zheng
School of Artificial Intelligence and Robotics, Hunan University
Kai Luo
School of Artificial Intelligence and Robotics, Hunan University
Kailun Yang
Professor, School of Artificial Intelligence and Robotics, Hunan University (HNU); KIT; UAH; ZJU
Computer Vision · Computational Optics · Intelligent Vehicles · Autonomous Driving · Robotics