🤖 AI Summary
Addressing the spatial, temporal, and multi-source heterogeneity of remote sensing data, this paper introduces OlmoEarth, a multimodal spatiotemporal foundation model tailored for Earth observation. Methodologically, the authors couple a remote sensing–specific masking strategy, a spatiotemporal-aware loss function, and a self-supervised learning formulation built on stable latent image modeling, all integrated into an end-to-end multimodal architecture and deployed in a platform spanning data collection, labeling, training, and inference. Evaluated against 12 other foundation models, OlmoEarth achieves state-of-the-art embedding performance on 15 of 24 benchmark tasks and, with full fine-tuning, ranks first on 19 of 29 downstream tasks. To foster reproducibility and real-world impact, the authors open-source the code, training data, and pre-trained weights, supporting global environmental monitoring and sustainable development applications.
📝 Abstract
Earth observation data presents a unique challenge: it is spatial like images, sequential like video or text, and highly multimodal. We present OlmoEarth: a multimodal, spatio-temporal foundation model that employs a novel self-supervised learning formulation, masking strategy, and loss, all designed for the Earth observation domain. OlmoEarth achieves state-of-the-art performance compared to 12 other foundation models across a variety of research benchmarks and real-world tasks from external partners. When evaluating embeddings, OlmoEarth achieves the best performance on 15 of 24 tasks, and with full fine-tuning it is the best on 19 of 29 tasks. We deploy OlmoEarth as the backbone of an end-to-end platform for data collection, labeling, training, and inference of Earth observation models. The OlmoEarth Platform puts frontier foundation models and powerful data management tools into the hands of non-profits and NGOs working to solve the world's biggest problems. OlmoEarth source code, training data, and pre-trained weights are available at [https://github.com/allenai/olmoearth_pretrain](https://github.com/allenai/olmoearth_pretrain).
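The abstract names the general paradigm (masked self-supervised learning over spatiotemporal tokens) without detailing it. The following is a minimal, generic NumPy sketch of that paradigm, not OlmoEarth's actual masking strategy or loss: tokens are laid out on a toy time × space grid, a random subset of positions is masked, and a reconstruction loss is computed only on the masked positions. All shapes, names, and the mask ratio here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spatiotemporal token grid: T timesteps, H x W spatial patches,
# D-dim embeddings per token. Purely illustrative dimensions.
T, H, W, D = 4, 8, 8, 16
tokens = rng.normal(size=(T, H, W, D))

def random_spacetime_mask(shape, mask_ratio, rng):
    """Mask a fixed fraction of (t, h, w) positions uniformly at random."""
    n = shape[0] * shape[1] * shape[2]
    n_masked = int(round(mask_ratio * n))
    flat = np.zeros(n, dtype=bool)
    flat[rng.choice(n, size=n_masked, replace=False)] = True
    return flat.reshape(shape)

mask = random_spacetime_mask((T, H, W), mask_ratio=0.75, rng=rng)

# A real model would encode the visible tokens and predict the masked ones;
# a zero "prediction" stands in here so the loss is well defined.
pred = np.zeros_like(tokens)
loss = float(np.mean((pred[mask] - tokens[mask]) ** 2))  # loss on masked positions only
```

Domain-specific variants of this recipe (e.g. masking whole modalities or contiguous time ranges rather than independent positions) are what a remote sensing–specific masking strategy would customize.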