AI Summary
Existing single-modality out-of-distribution (OOD) detection methods struggle with pixel-level OOD detection and segmentation in safety-critical multimodal scenarios (e.g., autonomous driving, surgical robotics), particularly when distributions are unknown across modalities (e.g., image + LiDAR), and they suffer from pervasive OOD overconfidence. This work proposes Feature Mixing, an unsupervised, modality-agnostic method with theoretical support for synthesizing multimodal anomalous features. We introduce CARLA-OOD, the first multimodal semantic segmentation benchmark containing synthetically generated OOD objects. Our framework jointly leverages multimodal feature fusion, OOD confidence calibration, and cross-modal consistency constraints. Evaluated on SemanticKITTI, nuScenes, CARLA-OOD, and MultiOOD, it achieves state-of-the-art performance while accelerating inference by 10× to 370× and significantly mitigating OOD overconfidence.
Abstract
Out-of-distribution (OOD) detection and segmentation are crucial for deploying machine learning models in safety-critical applications such as autonomous driving and robot-assisted surgery. While prior research has primarily focused on unimodal image data, real-world applications are inherently multimodal, requiring the integration of multiple modalities for improved OOD detection. A key challenge is the lack of supervision signals from unknown data, leading to overconfident predictions on OOD samples. To address this challenge, we propose Feature Mixing, an extremely simple and fast method for multimodal outlier synthesis with theoretical support, which can be further optimized to help the model better distinguish between in-distribution (ID) and OOD data. Feature Mixing is modality-agnostic and applicable to various modality combinations. Additionally, we introduce CARLA-OOD, a novel multimodal dataset for OOD segmentation, featuring synthetic OOD objects across diverse scenes and weather conditions. Extensive experiments on SemanticKITTI, nuScenes, CARLA-OOD datasets, and the MultiOOD benchmark demonstrate that Feature Mixing achieves state-of-the-art performance with a $10\times$ to $370\times$ speedup. Our source code and dataset will be available at https://github.com/mona4399/FeatureMixing.
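The abstract names Feature Mixing as a simple, modality-agnostic way to synthesize outlier features without real OOD data. The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch: it assumes Feature Mixing forms convex combinations of features drawn from different modalities and shuffled samples to produce pseudo-OOD features; the function name, mixing range, and pairing rule are all hypothetical.

```python
import numpy as np

def feature_mixing(img_feats, lidar_feats, seed=None):
    """Hypothetical sketch of multimodal outlier synthesis via feature mixing.

    img_feats, lidar_feats: (N, D) arrays of per-sample ID features from
    two modalities (names and shapes are illustrative assumptions).
    Returns (N, D) synthesized pseudo-OOD features.
    """
    rng = np.random.default_rng(seed)
    n = img_feats.shape[0]
    # Pair each image feature with a LiDAR feature from a *different* sample,
    # so the mixture is unlikely to lie on the ID feature manifold.
    perm = rng.permutation(n)
    # Per-sample mixing coefficients (the range is an assumption).
    lam = rng.uniform(0.3, 0.7, size=(n, 1))
    # Convex combination across modalities and samples -> pseudo-OOD features.
    return lam * img_feats + (1.0 - lam) * lidar_feats[perm]
```

A downstream OOD head could then be trained to assign low confidence to these mixed features and high confidence to the original ID features, which is one way the "further optimized" step described above could be realized.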