🤖 AI Summary
To address the limited robustness of semantic segmentation under challenging conditions such as low illumination, occlusion, and adverse weather, this paper proposes MM SAM-adapter, a framework that adaptively injects multimodal sensor data (e.g., LiDAR, thermal infrared) into the RGB feature space of the Segment Anything Model (SAM). Using a lightweight adapter network, the method performs dynamic, selective cross-modal fusion while preserving SAM's strong generalization from RGB features and exploiting the complementarity of auxiliary modalities. Evaluated on three challenging benchmarks (DeLiVER, FMB, and MUSES), MM SAM-adapter achieves state-of-the-art performance across all three. Notably, it outperforms competing methods on both the RGB-easy and RGB-hard subsets, demonstrating robustness in favorable and adverse real-world conditions alike.
📝 Abstract
Semantic segmentation, a key task in computer vision with broad applications in autonomous driving, medical imaging, and robotics, has advanced substantially with deep learning. Nevertheless, current approaches remain vulnerable to challenging conditions such as poor lighting, occlusions, and adverse weather. To address these limitations, multimodal methods that integrate auxiliary sensor data (e.g., LiDAR, infrared) have recently emerged, providing complementary information that enhances robustness. In this work, we present MM SAM-adapter, a novel framework that extends the capabilities of the Segment Anything Model (SAM) for multimodal semantic segmentation. The proposed method employs an adapter network that injects fused multimodal features into SAM's rich RGB features. This design enables the model to retain the strong generalization ability of RGB features while selectively incorporating auxiliary modalities only when they contribute additional cues. As a result, MM SAM-adapter achieves a balanced and efficient use of multimodal information. We evaluate our approach on three challenging benchmarks, DeLiVER, FMB, and MUSES, where MM SAM-adapter delivers state-of-the-art performance. To further analyze modality contributions, we partition DeLiVER and FMB into RGB-easy and RGB-hard subsets. Results consistently demonstrate that our framework outperforms competing methods in both favorable and adverse conditions, highlighting the effectiveness of multimodal adaptation for robust scene understanding. The code is available at the following link: https://github.com/iacopo97/Multimodal-SAM-Adapter.
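The core idea, injecting fused multimodal features into SAM's RGB feature stream through a lightweight adapter, can be illustrated with a minimal sketch. This is not the paper's actual architecture: the function names, the bottleneck adapter shape, and the concatenation-based fusion are all illustrative assumptions, and plain NumPy stands in for a deep-learning framework.

```python
import numpy as np

def adapter(fused_feats, w_down, w_up):
    # Hypothetical lightweight bottleneck adapter:
    # down-project, ReLU nonlinearity, up-project.
    hidden = np.maximum(fused_feats @ w_down, 0.0)
    return hidden @ w_up

def inject_multimodal(rgb_feats, aux_feats, w_fuse, w_down, w_up):
    # Fuse RGB with auxiliary-modality features (e.g., LiDAR or thermal),
    # then add the adapter output residually to the RGB features, so the
    # original RGB representation is preserved while auxiliary cues are
    # injected on top of it.
    fused = np.concatenate([rgb_feats, aux_feats], axis=-1) @ w_fuse
    return rgb_feats + adapter(fused, w_down, w_up)
```

The residual form (`rgb_feats + adapter(...)`) reflects the stated design goal: keep SAM's rich RGB features intact and let the adapter contribute auxiliary-modality information only where it adds signal.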