🤖 AI Summary
This study investigates whether spatial alignment in multimodal representation learning degrades modality-specific information—particularly in remote sensing fusion of heterogeneous sources (e.g., optical and SAR). We first establish a theoretical analysis framework revealing how alignment operations inherently erode modality-unique semantic content. To address this, we propose a self-supervised contrastive learning paradigm that jointly optimizes semantic alignment and modality fidelity. Extensive experiments on real-world remote sensing datasets demonstrate that aggressive spatial alignment improves cross-modal consistency but substantially compromises modality-discriminative feature representation. Our method preserves alignment performance while boosting modality-specific representation capability by 12.7% (average improvement). The work provides an interpretable trade-off principle between alignment and specificity for multimodal remote sensing fusion and releases open-source code and a benchmark dataset.
📝 Abstract
Combining multimodal data is a key issue in a wide range of machine learning tasks, including many remote sensing problems. In Earth observation, early multimodal data fusion methods were based on specific neural network architectures and supervised learning. Since then, the scarcity of labeled data has motivated self-supervised learning techniques. State-of-the-art multimodal representation learning techniques leverage the spatial alignment between satellite data from different modalities acquired over the same geographic area in order to foster a semantic alignment in the latent space. In this paper, we investigate whether these methods can preserve task-relevant information that is not shared across modalities. First, we show, under simplifying assumptions, when alignment strategies fundamentally lead to information loss. Then, we support our theoretical insight through numerical experiments in more realistic settings. With this theoretical and empirical evidence, we hope to support new developments in contrastive learning for the combination of multimodal satellite data. Our code and data are publicly available at https://github.com/Romain3Ch216/alg_maclean_25.
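The semantic-alignment strategy the abstract describes is commonly implemented with a cross-modal contrastive objective such as InfoNCE: embeddings of co-located patches from the two modalities (e.g., optical and SAR) form positive pairs, while patches from different locations serve as negatives. A minimal NumPy sketch of this idea follows; the function name, temperature value, and one-directional loss are illustrative assumptions, not details from the paper:

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """One-directional InfoNCE loss over co-located embedding pairs.

    z_a, z_b: (N, D) arrays of embeddings from two modalities
    (e.g., optical and SAR patches of the same N locations).
    Row i of z_a and row i of z_b are a positive pair; all other
    rows act as negatives. Hypothetical sketch, not the paper's code.
    """
    # L2-normalize so dot products are cosine similarities.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature           # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal: minimizing the loss pulls
    # co-located pairs together and pushes other locations apart.
    return -np.mean(np.diag(log_probs))
```

Minimizing such a loss drives the two modality encoders toward a shared latent representation for each location, which is exactly the mechanism the paper argues can erode information present in only one modality.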