🤖 AI Summary
To address the insufficient joint modeling of semantic segmentation and stereo matching in autonomous driving, this paper proposes TiCoSS, an end-to-end, tightly coupled multi-task learning framework. Methodologically, it introduces: (1) a tightly coupled, gated feature fusion strategy enabling bidirectional, selective cross-task feature interaction; (2) a hierarchical deep supervision strategy that enforces geometric–semantic consistency in intermediate representations; and (3) a coupling tightening loss function that jointly optimizes segmentation boundary fidelity and disparity continuity. The framework employs a shared backbone network to strengthen collaborative representation learning for both tasks. Evaluated on the KITTI and vKITTI2 benchmarks, it improves semantic segmentation mIoU by over 9% while substantially reducing stereo matching end-point error (EPE), achieving state-of-the-art performance on both tasks simultaneously.
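The bidirectional, gated cross-task interaction described above can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual module: the weight matrices, the sigmoid gating form, and the additive mixing rule are all our assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(f_seg, f_disp, w_seg, w_disp):
    """Bidirectional gated fusion between two task-specific feature maps.

    Each branch computes a gate from the concatenated features of both tasks
    and uses it to decide, per channel, how much of the other task's features
    to mix in. Shapes: f_seg, f_disp are (..., C); w_seg, w_disp are (2C, C).
    All of this is a hypothetical instantiation of "gated feature fusion".
    """
    concat = np.concatenate([f_seg, f_disp], axis=-1)  # (..., 2C)
    g_seg = sigmoid(concat @ w_seg)    # gate controlling flow into the seg branch
    g_disp = sigmoid(concat @ w_disp)  # gate controlling flow into the disp branch
    f_seg_out = f_seg + g_seg * f_disp   # seg borrows gated disparity cues
    f_disp_out = f_disp + g_disp * f_seg # disp borrows gated semantic cues
    return f_seg_out, f_disp_out
```

Because each gate is a sigmoid, every channel of the cross-task contribution is scaled into (0, 1), so a branch can selectively suppress unhelpful features from the other task rather than ingesting them wholesale.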
📝 Abstract
Semantic segmentation and stereo matching, analogous to the ventral and dorsal streams of the human visual system respectively, are two key components of autonomous driving perception systems. Addressing these two tasks with separate networks is no longer the mainstream direction in computer vision algorithm development, particularly given recent advances in large vision models and embodied artificial intelligence. The trend is shifting towards combining them within a joint learning framework, with particular emphasis on feature sharing between the two tasks. The major contribution of this study lies in comprehensively tightening the coupling between semantic segmentation and stereo matching. Specifically, it introduces three novelties: (1) a tightly coupled, gated feature fusion strategy, (2) a hierarchical deep supervision strategy, and (3) a coupling tightening loss function. Together, these contributions yield TiCoSS, a state-of-the-art joint learning framework that simultaneously tackles semantic segmentation and stereo matching. Through extensive experiments on the KITTI and vKITTI2 datasets, along with qualitative and quantitative analyses, we validate the effectiveness of the developed strategies and loss function, and demonstrate TiCoSS's superior performance over prior art, with a notable increase in mIoU of over 9%. Our source code will be publicly available at mias.group/TiCoSS upon publication.
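A joint objective of the kind the abstract calls a "coupling tightening loss" could be instantiated roughly as below. This is a speculative sketch only: the term names, the weights `lam_disp` and `lam_couple`, and the boundary-aware smoothness penalty are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def coupled_loss(seg_logits, seg_labels, disp_pred, disp_gt,
                 lam_disp=1.0, lam_couple=0.1):
    """Hypothetical joint objective coupling segmentation and stereo matching.

    Combines (a) pixel-wise cross-entropy for segmentation, (b) an L1 (EPE-style)
    disparity error, and (c) a coupling term that penalizes disparity jumps
    between horizontally adjacent pixels that share the same semantic label,
    i.e. disparity should stay smooth away from semantic boundaries.
    Shapes: seg_logits (H, W, K); seg_labels (H, W) int; disparities (H, W).
    """
    # numerically stable log-softmax cross-entropy
    logits = seg_logits - seg_logits.max(axis=-1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    ce = -np.take_along_axis(log_probs, seg_labels[..., None], axis=-1).mean()
    # mean absolute disparity error
    l1 = np.abs(disp_pred - disp_gt).mean()
    # disparity-continuity penalty, gated off at semantic boundaries
    same_label = (seg_labels[:, 1:] == seg_labels[:, :-1]).astype(float)
    disp_jump = np.abs(disp_pred[:, 1:] - disp_pred[:, :-1])
    couple = (same_label * disp_jump).mean()
    return ce + lam_disp * l1 + lam_couple * couple
```

The coupling term is what ties the two tasks' gradients together: a disparity discontinuity inside a semantically uniform region is penalized, while a discontinuity at a predicted boundary is not, so both heads are pushed towards mutually consistent edges.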