REOBench: Benchmarking Robustness of Earth Observation Foundation Models

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Robustness of Earth observation foundation models under realistic corruptions remains systematically unassessed. Method: We introduce REOBench, the first dedicated robustness benchmark for Earth observation, covering six representative tasks and twelve realistic degradations applied to high-resolution optical remote sensing imagery, including both appearance- and geometry-related perturbations. We propose a fine-grained, task-adaptive evaluation protocol and evaluate models trained with masked image modeling, contrastive learning, and vision-language pretraining, using a remote sensing-specific multi-dimensional degradation synthesis strategy. Contribution/Results: Experiments reveal strong task-architecture-perturbation dependencies in model robustness, with performance drops ranging from under 1% to over 20%. Notably, multimodal vision-language models demonstrate superior robustness on cross-modal tasks. REOBench establishes a reproducible benchmark and provides concrete, actionable insights for advancing robust Earth observation foundation models.

📝 Abstract
Earth observation foundation models have shown strong generalization across multiple Earth observation tasks, but their robustness under real-world perturbations remains underexplored. To bridge this gap, we introduce REOBench, the first comprehensive benchmark for evaluating the robustness of Earth observation foundation models across six tasks and twelve types of image corruptions, including both appearance-based and geometric perturbations. To ensure realistic and fine-grained evaluation, our benchmark focuses on high-resolution optical remote sensing images, which are widely used in critical applications such as urban planning and disaster response. We conduct a systematic evaluation of a broad range of models trained using masked image modeling, contrastive learning, and vision-language pre-training paradigms. Our results reveal that (1) existing Earth observation foundation models experience significant performance degradation when exposed to input corruptions; (2) the severity of degradation varies across tasks, model architectures, backbone sizes, and corruption types, with performance drops ranging from less than 1% to over 20%; and (3) vision-language models show enhanced robustness, particularly in multimodal tasks. REOBench underscores the vulnerability of current Earth observation foundation models to real-world corruptions and provides actionable insights for developing more robust and reliable models.
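The abstract reports performance drops from under 1% to over 20% when models are fed corrupted inputs. A minimal sketch of how such a relative degradation figure is typically computed in robustness benchmarks (the exact REOBench metric is an assumption, not taken from the paper):

```python
def performance_drop(clean_score: float, corrupted_score: float) -> float:
    """Relative performance drop in percent between a model's score on
    clean inputs and its score on corrupted inputs. The metric name and
    normalization are illustrative assumptions, not the paper's definition."""
    return 100.0 * (clean_score - corrupted_score) / clean_score

# Example: a segmentation model scoring 0.80 mIoU on clean imagery
# and 0.62 under a haze-like corruption.
drop = performance_drop(0.80, 0.62)
print(f"{drop:.1f}% drop")  # a drop in the >20% regime the abstract describes
```

Scores here could be accuracy, mIoU, or any task metric; comparing drops across tasks only makes sense when the same metric is used on both clean and corrupted splits.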
Problem

Research questions and friction points this paper is trying to address.

Assessing robustness of Earth observation models to real-world image corruptions
Evaluating performance degradation across tasks, architectures, and corruption types
Determining whether vision-language models offer greater robustness in multimodal tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

First benchmark for Earth observation model robustness
Evaluates six tasks with twelve image corruptions
Focuses on high-resolution optical remote sensing images
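The benchmark pairs appearance-based corruptions (e.g. noise, blur) with geometric ones (e.g. rotation). A minimal sketch of one corruption from each family, applied at graded severities to a float image in [0, 1]; the specific functions and severity values are illustrative assumptions, not REOBench's actual corruption suite:

```python
import numpy as np

def gaussian_noise(img: np.ndarray, severity: float = 0.1) -> np.ndarray:
    """Appearance-based corruption: additive Gaussian noise, clipped
    back into the valid [0, 1] pixel range."""
    noisy = img + np.random.normal(0.0, severity, img.shape)
    return np.clip(noisy, 0.0, 1.0)

def rotate_quarter(img: np.ndarray, k: int = 1) -> np.ndarray:
    """Geometric corruption: rotation by k * 90 degrees in the image
    plane, leaving the channel axis untouched."""
    return np.rot90(img, k=k, axes=(0, 1))

# Apply graded severities to a synthetic 64x64 RGB tile, mirroring the
# idea of evaluating one model across multiple degradation levels.
clean = np.random.rand(64, 64, 3)
corrupted = [gaussian_noise(clean, s) for s in (0.05, 0.1, 0.2)]
rotated = rotate_quarter(clean)
```

Real benchmark corruptions are more varied (haze, compression, translation, etc.), but each follows this shape: a deterministic or stochastic transform parameterized by a severity level, applied to the evaluation split only.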