DUNIA: Pixel-Sized Embeddings via Cross-Modal Alignment for Earth Observation Applications

📅 2025-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing self-supervised multimodal methods in Earth observation produce coarse-grained (patch-level) embeddings, hindering precise alignment with pixel-level LiDAR data and limiting cross-modal fusion and fine-grained applications. This paper introduces the first pixel-level cross-modal alignment framework for remote sensing imagery and full-waveform LiDAR, leveraging contrastive learning to align image-LiDAR representations at the pixel level, thereby constructing a unified embedding space that supports zero-shot transfer and few-shot fine-tuning. Key contributions include: (1) the first pixel-level image-LiDAR alignment paradigm; (2) end-to-end modeling of full-waveform LiDAR; and (3) a zero-shot classifier adaptation mechanism. Experiments demonstrate that the method achieves superior zero-shot performance over task-specific supervised models across seven environmental monitoring tasks; under few-shot fine-tuning, it matches or surpasses state-of-the-art methods on five of six evaluated tasks.
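The pixel-level contrastive alignment described above can be sketched as a symmetric InfoNCE-style objective over co-located pixel embeddings. This is purely an illustration of the general technique: the loss form, temperature value, and function names below are assumptions, not the paper's actual implementation.

```python
import numpy as np

def info_nce_pixel_loss(img_emb, lidar_emb, temperature=0.07):
    """Illustrative symmetric InfoNCE loss over per-pixel embeddings.

    img_emb, lidar_emb: (N, D) arrays where row i of each array is the
    embedding of the same ground pixel from the image and LiDAR encoders.
    Matching pixels are treated as positive pairs; all other pixels in
    the batch serve as negatives.
    """
    # L2-normalise so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    lid = lidar_emb / np.linalg.norm(lidar_emb, axis=1, keepdims=True)

    logits = img @ lid.T / temperature   # (N, N); positives on the diagonal
    n = logits.shape[0]

    def cross_entropy_diag(lg):
        # numerically stable log-softmax over rows, loss on the diagonal
        z = lg - lg.max(axis=1, keepdims=True)
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # symmetric: image-to-LiDAR and LiDAR-to-image directions
    return 0.5 * (cross_entropy_diag(logits) + cross_entropy_diag(logits.T))
```

Because the two encoders are pulled toward a shared per-pixel embedding space, the resulting embeddings can be compared directly to class prototypes at inference time, which is what makes zero-shot classification possible without task-specific heads.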

📝 Abstract
Significant efforts have been directed towards adapting self-supervised multimodal learning for Earth observation applications. However, existing methods produce coarse patch-sized embeddings, limiting their effectiveness and integration with other modalities like LiDAR. To close this gap, we present DUNIA, an approach to learn pixel-sized embeddings through cross-modal alignment between images and full-waveform LiDAR data. As the model is trained in a contrastive manner, the embeddings can be directly leveraged in the context of a variety of environmental monitoring tasks in a zero-shot setting. In our experiments, we demonstrate the effectiveness of the embeddings for seven such tasks (canopy height mapping, fractional canopy cover, land cover mapping, tree species identification, plant area index, crop type classification, and per-pixel waveform-based vertical structure mapping). The results show that the embeddings, along with zero-shot classifiers, often outperform specialized supervised models, even in low data regimes. In the fine-tuning setting, we show strong low-shot capabilities with performances near or better than state-of-the-art on five out of six tasks.
Problem

Research questions and friction points this paper is trying to address.

Existing self-supervised multimodal methods yield only coarse patch-sized embeddings
Poor integration with pixel-level modalities such as full-waveform LiDAR
Need for embeddings that transfer across diverse environmental monitoring tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pixel-sized embeddings via cross-modal alignment
Contrastive training for zero-shot tasks
Strong low-shot capabilities in fine-tuning
Ibrahim Fayad
Laboratoire des Sciences du Climat et de l’Environnement, LSCE/IPSL, France; Kayrros SAS, Paris 75009, France
Max Zimmer
Zuse Institute Berlin
Deep Learning · Optimization · Mathematics
Martin Schwartz
Laboratoire des Sciences du Climat et de l’Environnement, LSCE/IPSL, France
Philippe Ciais
Laboratoire des Sciences du Climat et de l’Environnement, LSCE/IPSL, France
Fabian Gieseke
Department of Information Systems, University of Münster
Data Engineering · Machine Learning
Gabriel Belouze
Laboratoire des Sciences du Climat et de l’Environnement, LSCE/IPSL, France
Sarah Brood
Department of Computer Science, CNRS, INRIA & École Normale Supérieure, Paris 75230, France
A. D. Truchis
Kayrros SAS, Paris 75009, France
Alexandre d'Aspremont
CNRS & École Normale Supérieure, Paris
Optimization · Machine Learning · Statistics