🤖 AI Summary
Multimodal active learning faces unique challenges, including missing modalities, varying modality-specific difficulties, and inconsistent interaction structures. Existing approaches lack systematic evaluation and often over-rely on a single modality. This work introduces the first benchmark framework that isolates key challenges through synthetic datasets and systematically evaluates performance on real-world data by integrating multimodal neural networks with diverse query strategies. Experimental results reveal a pervasive imbalance in modality utilization across current methods and demonstrate that multimodal query strategies do not consistently outperform unimodal baselines. These findings underscore the necessity of designing modality-aware active learning mechanisms and provide clear directions for future research in this area.
📝 Abstract
Multimodal learning enables neural networks to integrate information from heterogeneous sources, but active learning in this setting faces distinct challenges: missing modalities, differences in modality difficulty, and varying interaction structures, all of which are absent in the unimodal case. While the behavior of active learning strategies in unimodal settings is well characterized, their behavior under such multimodal conditions remains poorly understood. We introduce a new framework for benchmarking multimodal active learning that isolates these pitfalls using synthetic datasets, allowing systematic evaluation without confounding noise. Using this framework, we compare unimodal and multimodal query strategies and validate our findings on two real-world datasets. Our results show that models consistently develop imbalanced representations, relying primarily on one modality while neglecting others. Existing query methods do not mitigate this effect, and multimodal strategies do not consistently outperform unimodal ones. These findings highlight limitations of current active learning methods and underline the need for modality-aware query strategies that explicitly address these pitfalls. Code and benchmark resources will be made publicly available.
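For readers unfamiliar with query strategies, the sketch below illustrates the general idea behind one common unimodal baseline, uncertainty sampling: the learner queries labels for the pool samples its current model is least sure about. This is an illustrative example only, not the paper's implementation; the `entropy_query` helper and the toy probability pool are hypothetical.

```python
import numpy as np

def entropy_query(probs: np.ndarray, k: int) -> np.ndarray:
    """Select the k unlabeled samples with the highest predictive entropy.

    probs: (n_samples, n_classes) softmax outputs of the current model.
    Returns indices of the k most uncertain samples, most uncertain first.
    """
    eps = 1e-12  # guard against log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(entropy)[-k:][::-1]

# Toy unlabeled pool: near-certain, near-uniform, and in-between predictions.
pool = np.array([
    [0.98, 0.01, 0.01],  # confident -> low entropy
    [0.34, 0.33, 0.33],  # near-uniform -> high entropy
    [0.70, 0.20, 0.10],  # moderately uncertain
])
print(entropy_query(pool, k=2))  # -> [1 2]: the least confident rows are queried
```

A multimodal variant would replace the single `probs` array with per-modality predictions and must decide how to aggregate their uncertainties, which is precisely where the imbalance issues described above arise.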