Using Multiple Input Modalities Can Improve Data-Efficiency and O.O.D. Generalization for ML with Satellite Imagery

📅 2025-07-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Machine learning with satellite imagery (SatML) suffers from low data efficiency and poor out-of-distribution (OOD) generalization across geographic regions. Method: We propose a multimodal modeling paradigm that integrates diverse geospatial data sources, including digital elevation models (DEMs), land cover maps, and meteorological/environmental sensor data, into satellite image analysis. Through a systematic comparison of hand-crafted fusion strategies (e.g., channel concatenation) with learned fusion mechanisms (e.g., attention), we find that simple, interpretable hand-crafted fusion yields superior robustness and performance in few-shot and cross-regional settings. Contribution/Results: Evaluated across multiple SatML benchmark tasks (classification, regression, segmentation), the approach significantly improves data efficiency under label scarcity and enhances generalization to unseen geographic domains, establishing a paradigm for lightweight, reliable, and deployable remote sensing AI systems.

📝 Abstract
A large variety of geospatial data layers is available around the world ranging from remotely-sensed raster data like satellite imagery, digital elevation models, predicted land cover maps, and human-annotated data, to data derived from environmental sensors such as air temperature or wind speed data. A large majority of machine learning models trained on satellite imagery (SatML), however, are designed primarily for optical input modalities such as multi-spectral satellite imagery. To better understand the value of using other input modalities alongside optical imagery in supervised learning settings, we generate augmented versions of SatML benchmark tasks by appending additional geographic data layers to datasets spanning classification, regression, and segmentation. Using these augmented datasets, we find that fusing additional geographic inputs with optical imagery can significantly improve SatML model performance. Benefits are largest in settings where labeled data are limited and in geographic out-of-sample settings, suggesting that multi-modal inputs may be especially valuable for data-efficiency and out-of-sample performance of SatML models. Surprisingly, we find that hard-coded fusion strategies outperform learned variants, with interesting implications for future work.
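The abstract describes appending additional geographic data layers to optical imagery as extra input channels. A minimal sketch of that hand-crafted "channel concatenation" fusion is below; the array shapes, layer names, and normalization choice are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical input rasters for one image patch (shapes are illustrative):
# a 4-band optical patch plus auxiliary geospatial layers on the same grid.
H, W = 64, 64
optical = np.random.rand(4, H, W).astype(np.float32)    # e.g. RGB + NIR bands
dem = np.random.rand(1, H, W).astype(np.float32)        # digital elevation model
landcover = np.random.rand(3, H, W).astype(np.float32)  # predicted land cover layers

def channel_concat_fusion(optical, *aux_layers):
    """Hand-crafted input fusion: normalize each layer, then stack the
    auxiliary rasters as extra input channels so a standard CNN can
    consume the fused tensor with no architectural changes."""
    def norm(x):
        return (x - x.mean()) / (x.std() + 1e-8)
    layers = [norm(optical)] + [norm(a) for a in aux_layers]
    return np.concatenate(layers, axis=0)

fused = channel_concat_fusion(optical, dem, landcover)
print(fused.shape)  # (8, 64, 64): 4 optical + 1 DEM + 3 land cover channels
```

The appeal of this strategy, per the abstract's findings, is that it adds no learned parameters at the fusion stage, which may explain its robustness when labeled data are scarce.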
Problem

Research questions and friction points this paper is trying to address.

Enhancing satellite ML models with multi-modal geospatial data inputs
Improving data-efficiency in limited labeled data scenarios
Boosting out-of-distribution generalization for satellite imagery tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fusing multiple geospatial data layers
Improving data-efficiency with multi-modal inputs
Hard-coded fusion outperforms learned strategies
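The last bullet contrasts hard-coded fusion with learned alternatives. As a toy illustration of the learned side of that comparison (not the paper's actual architecture), a per-modality softmax gate might weight each input layer before concatenation; the gate logits here are hypothetical stand-ins for parameters that would normally be trained end-to-end.

```python
import numpy as np

def gated_fusion(modalities, logits):
    """Toy learned-fusion baseline: a softmax over per-modality logits
    scales each modality before channel concatenation. In a real model
    the logits would be trainable parameters."""
    w = np.exp(logits - logits.max())  # numerically stable softmax
    w = w / w.sum()
    return np.concatenate([wi * m for wi, m in zip(w, modalities)], axis=0)

# With equal logits, both modalities receive weight 0.5.
optical = np.ones((4, 8, 8), dtype=np.float32)
dem = np.ones((1, 8, 8), dtype=np.float32)
fused = gated_fusion([optical, dem], np.array([0.0, 0.0]))
print(fused.shape)  # (5, 8, 8)
```

The paper's finding is that such learned weighting did not beat plain concatenation, a result the summary attributes to the simplicity and interpretability of the hand-crafted scheme.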
Arjun Rao
Department of Computer Science, University of Colorado Boulder
Esther Rolf
Assistant Professor, CU Boulder
machine learning