GeoSANE: Learning Geospatial Representations from Models, Not Data

πŸ“… 2026-03-24
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing foundation models in remote sensing struggle to work together because they span heterogeneous modalities and tasks. This work proposes GeoSANE, a geospatial model foundry that, for the first time, learns a unified neural representation directly from pretrained model weights rather than from raw data. Through weight fusion and on-demand weight generation, GeoSANE dynamically constructs new networks tailored to classification, segmentation, and detection tasks. The approach enables cross-model and cross-task knowledge integration: across ten diverse remote sensing datasets and GEO-Bench, the generated models outperform training from scratch, match or exceed current state-of-the-art performance, and significantly surpass pruning and knowledge distillation when producing lightweight models.

πŸ“ Abstract
Recent advances in remote sensing have led to a growing number of available foundation models, each trained on different modalities, datasets, and objectives, yet capturing only part of the vast geospatial knowledge landscape. While these models show strong results within their respective domains, their capabilities remain complementary rather than unified. Therefore, instead of choosing one model over another, we aim to combine their strengths into a single shared representation. We introduce GeoSANE, a geospatial model foundry that learns a unified neural representation from the weights of existing foundation models and task-specific models, and can generate novel neural network weights on demand. Given a target architecture, GeoSANE generates weights ready for finetuning on classification, segmentation, and detection tasks across multiple modalities. Models generated by GeoSANE consistently outperform their counterparts trained from scratch, match or surpass state-of-the-art remote sensing foundation models, and outperform models obtained through pruning or knowledge distillation when generating lightweight networks. Evaluations across ten diverse datasets and on GEO-Bench confirm its strong generalization capabilities. By shifting from pre-training to weight generation, GeoSANE introduces a new framework for unifying and transferring geospatial knowledge across models and tasks. Code is available at \href{https://hsg-aiml.github.io/GeoSANE/}{hsg-aiml.github.io/GeoSANE/}.
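The abstract does not spell out GeoSANE's algorithm, but the interface it implies has three stages: encode each pretrained model's weights into a shared representation, fuse those representations, and decode the result into weights for a target architecture. The following is a purely illustrative, toy sketch of that pipeline in numpy; every function name and the chunk-pooling, mean-fusion, and seeded-sampling choices are assumptions for illustration, not GeoSANE's actual method.

```python
import numpy as np

def flatten_weights(model):
    """Concatenate all weight arrays of a model into one flat vector."""
    return np.concatenate([w.ravel() for w in model])

def encode(flat, dim=16):
    """Pool a flat weight vector into a fixed-size embedding by chunk-averaging
    (a stand-in for a learned weight encoder)."""
    chunks = np.array_split(flat, dim)
    return np.array([c.mean() for c in chunks])

def fuse(embeddings):
    """Fuse per-model embeddings into one shared representation (simple mean,
    standing in for learned weight fusion)."""
    return np.mean(np.stack(embeddings), axis=0)

def generate(shared, target_shapes, seed=0):
    """Decode the shared representation into weights for a target architecture.
    Here: scale random arrays by a statistic of the representation, so the
    output depends on both the fused embedding and the requested shapes."""
    rng = np.random.default_rng(seed)
    scale = float(np.abs(shared).mean()) + 1e-8
    return [scale * rng.standard_normal(shape) for shape in target_shapes]

# Two toy "pretrained models" with different architectures.
model_a = [np.ones((4, 4)), np.ones(4)]
model_b = [np.full((2, 8), 0.5), np.zeros(8)]

shared = fuse([encode(flatten_weights(m)) for m in (model_a, model_b)])
# Request weights for a third, smaller target architecture.
target = generate(shared, target_shapes=[(3, 3), (3,)])
print([w.shape for w in target])  # [(3, 3), (3,)]
```

The point of the sketch is the shape of the pipeline: models of different architectures map into one fixed-size representation, and weight generation is conditioned on a target architecture rather than tied to the source models' layouts.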
Problem

Research questions and friction points this paper is trying to address.

geospatial representation
foundation models
model unification
remote sensing
neural representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

GeoSANE
foundation model fusion
weight generation
geospatial representation learning
model foundry
πŸ”Ž Similar Papers
No similar papers found.