Benchmarking Multi-modal Semantic Segmentation under Sensor Failures: Missing and Noisy Modality Robustness

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Robustness evaluation of multi-modal semantic segmentation (MMSS) under sensor failures, such as complete modality absence, random modality dropout, or modality-specific noise, lacks standardized benchmarks. Method: We introduce the first standardized benchmark dedicated to MMSS robustness, systematically covering three failure categories: full modality missing, random modality dropout, and modality noise. We model modality failures probabilistically under two conditions: a uniform distribution over damaged modality combinations, and independent per-modality failures sampled from a Bernoulli distribution. Contribution/Results: We define four novel robustness metrics (e.g., $mIoU^{Avg}_{EMM}$) to fill critical evaluation gaps, design a unified fusion-analysis and quantitative-assessment pipeline, and publicly release an open-source benchmark toolkit. This work strengthens the comparability and reliability of MMSS models under realistic sensor degradation scenarios.

📝 Abstract
Multi-modal semantic segmentation (MMSS) addresses the limitations of single-modality data by integrating complementary information across modalities. Despite notable progress, a significant gap persists between research and real-world deployment due to variability and uncertainty in multi-modal data quality. Robustness has thus become essential for practical MMSS applications. However, the absence of standardized benchmarks for evaluating robustness hinders further advancement. To address this, we first survey existing MMSS literature and categorize representative methods to provide a structured overview. We then introduce a robustness benchmark that evaluates MMSS models under three scenarios: Entire-Missing Modality (EMM), Random-Missing Modality (RMM), and Noisy Modality (NM). From a probabilistic standpoint, we model modality failure under two conditions: (1) all damaged combinations are equally probable; (2) each modality fails independently following a Bernoulli distribution. Based on these, we propose four metrics ($mIoU^{Avg}_{EMM}$, $mIoU^{E}_{EMM}$, $mIoU^{Avg}_{RMM}$, and $mIoU^{E}_{RMM}$) to assess model robustness under EMM and RMM. This work provides the first dedicated benchmark for MMSS robustness, offering new insights and tools to advance the field. Source code is available at https://github.com/Chenfei-Liao/Multi-Modal-Semantic-Segmentation-Robustness-Benchmark.
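The two failure models from the abstract can be sketched in a few lines: EMM averages performance uniformly over every damaged modality combination, while RMM drops each modality independently via a Bernoulli trial. This is a minimal illustration only; `eval_fn`, the modality names, and the failure probability `p_fail` are hypothetical placeholders, not the benchmark's actual API (see the linked repository for the real toolkit).

```python
import itertools
import random

def miou_avg_emm(eval_fn, modalities):
    """Entire-Missing Modality: average mIoU over every nonempty proper
    subset of modalities, each damaged combination equally probable.
    `eval_fn(available)` is a stand-in returning the model's mIoU when
    only the modalities in `available` are present."""
    scores = []
    for k in range(1, len(modalities)):
        for avail in itertools.combinations(modalities, k):
            scores.append(eval_fn(set(avail)))
    return sum(scores) / len(scores)

def sample_rmm_mask(modalities, p_fail=0.3, rng=random):
    """Random-Missing Modality: each modality fails independently with
    Bernoulli probability p_fail; re-sample if every modality fails."""
    while True:
        avail = {m for m in modalities if rng.random() >= p_fail}
        if avail:
            return avail
```

An RMM score would then be estimated by averaging `eval_fn` over many sampled masks, rather than enumerating combinations as in EMM.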
Problem

Research questions and friction points this paper is trying to address.

Evaluates MMSS robustness under missing and noisy modalities
Addresses lack of standardized benchmarks for MMSS robustness
Proposes metrics to assess model performance in failure scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces benchmark for multi-modal segmentation robustness
Models modality failure with probabilistic conditions
Proposes four metrics to assess model robustness