🤖 AI Summary
Existing vision-based 3D semantic occupancy prediction methods typically adopt modular pipelines, where independently optimized components or fixed-input dependencies lead to cascading errors and semantic inconsistency. To address this, we propose a causality-aware end-to-end learning framework that, for the first time, models the 2D→3D semantic mapping as a differentiable causal process. We introduce a gradient-based causal loss that renders traditionally non-trainable modules—such as depth estimation and voxel projection—end-to-end learnable. Additionally, we design channel-grouped dimensional lifting, learnable camera parameter offsets, and normalized convolutions to strengthen 2D–3D semantic alignment and robustness against perturbations. Evaluated on the Occ3D benchmark, our method achieves state-of-the-art performance, with significant improvements in robustness to camera parameter perturbations and semantic consistency between 2D inputs and 3D outputs.
📝 Abstract
Vision-based 3D semantic occupancy prediction is a critical task in 3D vision that integrates volumetric 3D reconstruction with semantic understanding. Existing methods, however, often rely on modular pipelines whose components are optimized independently or depend on pre-configured inputs, leading to cascading errors. In this paper, we address this limitation by designing a novel causal loss that enables holistic, end-to-end supervision of the modular 2D-to-3D transformation pipeline. Grounded in the principle of 2D-to-3D semantic causality, this loss regulates the gradient flow from 3D voxel representations back to the 2D features. Consequently, it renders the entire pipeline differentiable, unifying the learning process and making previously non-trainable components fully learnable. Building on this principle, we propose the Semantic Causality-Aware 2D-to-3D Transformation, which comprises three components guided by our causal loss: Channel-Grouped Lifting for adaptive semantic mapping, Learnable Camera Offsets for enhanced robustness against camera perturbations, and Normalized Convolution for effective feature propagation. Extensive experiments demonstrate that our method achieves state-of-the-art performance on the Occ3D benchmark, with significant robustness to camera perturbations and improved 2D-to-3D semantic consistency.
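Of the three components, normalized convolution is a standard technique for propagating features over sparse grids, such as the voxel volumes produced by 2D-to-3D lifting, where many cells receive no projected feature. The sketch below is a generic, minimal illustration of the idea (not the authors' implementation, whose kernel shapes and dimensionality are not specified here): the feature map is filtered jointly with a validity mask, and the filtered features are divided by the filtered mask so that each output averages only over valid neighbors.

```python
import numpy as np

def normalized_conv2d(feat, mask, k=3):
    """Normalized convolution with a k x k box kernel.

    feat : 2D feature map (invalid cells may hold any value)
    mask : 2D validity mask, 1.0 where feat is valid, 0.0 elsewhere
    Each output cell is the mean of the *valid* values in its window.
    """
    pad = k // 2
    f = np.pad(feat * mask, pad)  # zero out invalid cells, then pad
    m = np.pad(mask, pad)
    out = np.zeros(feat.shape, dtype=float)
    for i in range(feat.shape[0]):
        for j in range(feat.shape[1]):
            num = f[i:i + k, j:j + k].sum()   # sum of valid features
            den = m[i:i + k, j:j + k].sum()   # count of valid neighbors
            out[i, j] = num / den if den > 0 else 0.0
    return out

# A single valid cell spreads its value to every window that sees it,
# instead of being diluted by the surrounding zeros.
feat = np.zeros((3, 3)); feat[1, 1] = 4.0
mask = (feat > 0).astype(float)
print(normalized_conv2d(feat, mask))  # every cell is 4.0
```

A plain convolution would average the valid value with empty cells and shrink it; dividing by the mask response removes that bias, which is why normalized convolution suits the sparse volumes that lifting produces.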