🤖 AI Summary
This work addresses the scarcity of large-scale, multi-sensor remote sensing image-text datasets with diverse textual annotations, which has hindered the application of vision-language models in Earth observation. We present the first large-scale dataset integrating co-registered Sentinel-1 (SAR) and Sentinel-2 (multispectral) imagery, comprising 464,044 image pairs and 9.6 million multi-granular text annotations, including geographically anchored descriptions, visual question answering prompts, and referring expressions. By leveraging multi-source image fusion and geography-aware text generation, our approach significantly enhances vision-language alignment in remote sensing contexts. Experimental results demonstrate that existing models exhibit limited performance on complex land-cover understanding tasks, whereas fine-tuning on our dataset yields consistent improvements across all evaluated benchmarks. We also release a human-verified benchmark suite to support rigorous evaluation.
📝 Abstract
Vision-language models (VLMs) have shown strong performance in computer vision (CV), yet they remain limited on remote sensing (RS) data due to the lack of large-scale, multi-sensor RS image-text datasets with diverse textual annotations. Existing datasets predominantly include aerial red-green-blue (RGB) imagery with short or weakly grounded captions, and provide limited diversity in annotation types. To address this limitation, we introduce BigEarthNet.txt, a large-scale, multi-sensor image-text dataset designed to advance instruction-driven image-text learning in Earth observation across multiple tasks. BigEarthNet.txt contains 464,044 co-registered Sentinel-1 synthetic aperture radar and Sentinel-2 multispectral image pairs with 9.6M text annotations, including: i) geographically anchored captions describing land-use/land-cover (LULC) classes, their spatial relations, and environmental context; ii) visual question answering (VQA) pairs relevant to different tasks; and iii) referring expression detection instructions for bounding box prediction. Through a comparative statistical analysis, we demonstrate that BigEarthNet.txt surpasses existing RS image-text datasets in textual richness and annotation type variety. We further establish a manually verified benchmark split to evaluate VLMs developed in both RS and CV. The results reveal the limitations of these models on tasks involving complex LULC classes, whereas fine-tuning on BigEarthNet.txt yields consistent performance gains across all considered tasks.
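To make the three annotation types concrete, the sketch below shows what a single image-text sample could look like. This is a minimal illustrative example, not the released schema: all field names (`s1_patch`, `vqa`, `referring_expression`, etc.), identifiers, caption text, and the bounding-box coordinate convention are assumptions for illustration only.

```python
# Hypothetical sketch of one BigEarthNet.txt sample.
# Field names and values are illustrative assumptions, not the released schema.
sample = {
    # Co-registered Sentinel-1 (SAR) and Sentinel-2 (multispectral) patches
    "s1_patch": "S1_patch_placeholder",  # placeholder identifier
    "s2_patch": "S2_patch_placeholder",  # placeholder identifier
    # i) geographically anchored caption grounded in LULC classes
    "caption": (
        "A coastal scene where broad-leaved forest borders a complex "
        "cultivation pattern along a river."
    ),
    # ii) visual question answering (VQA) pair
    "vqa": {
        "question": "Which land-cover class occupies most of the patch?",
        "answer": "Broad-leaved forest",
    },
    # iii) referring expression for bounding-box prediction
    "referring_expression": {
        "text": "the water body in the lower-left corner",
        "bbox": [12, 84, 47, 118],  # assumed [x_min, y_min, x_max, y_max] in pixels
    },
}

def to_instruction(example: dict) -> str:
    """Format the VQA annotation as an instruction-style prompt."""
    return (
        f"Question: {example['vqa']['question']}\n"
        f"Answer: {example['vqa']['answer']}"
    )

print(to_instruction(sample))
```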