An Efficient and Effective Encoder Model for Vision and Language Tasks in the Remote Sensing Domain

📅 2025-12-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high parameter count and computational cost of vision-language models in remote sensing multi-task learning (image captioning and cross-modal retrieval), this paper proposes GeoMELT, a lightweight encoder-only architecture. It jointly models text generation and cross-modal retrieval in remote sensing, a combination rarely addressed by a single model, leveraging a compact Transformer design, multi-task joint training, and explicit visual-linguistic feature alignment and sharing. These mechanisms preserve strong representational capacity while drastically reducing model size. Evaluated on multiple remote sensing vision-and-language benchmarks, GeoMELT achieves state-of-the-art or near-state-of-the-art performance. It reduces the parameter count by 68% and accelerates inference by 3.2× compared to prior methods, significantly lowering deployment overhead. This establishes an efficient, scalable paradigm for multi-task remote sensing analysis under resource constraints.
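The multi-task joint training described above can be illustrated with a minimal sketch. This is a hypothetical example, not the paper's actual code: it combines a captioning cross-entropy term with a symmetric contrastive (InfoNCE-style) loss that aligns image and text embeddings from a shared encoder; the function names, the temperature value, and the weighting `alpha` are assumptions for illustration.

```python
# Hypothetical sketch (not GeoMELT's actual implementation): joint objective
# combining a captioning loss with a contrastive image-text alignment loss.
import numpy as np

def info_nce_loss(img_feats, txt_feats, temperature=0.07):
    """Symmetric contrastive loss aligning paired image and text embeddings."""
    # L2-normalise so the dot product is cosine similarity.
    img = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    txt = txt_feats / np.linalg.norm(txt_feats, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (B, B) similarity matrix
    labels = np.arange(len(logits))             # matching pairs on the diagonal

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)    # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()     # cross-entropy on the diagonal

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (xent(logits) + xent(logits.T))

def joint_loss(img_feats, txt_feats, caption_ce, alpha=1.0):
    """Multi-task objective: captioning cross-entropy + weighted contrastive term."""
    return caption_ce + alpha * info_nce_loss(img_feats, txt_feats)
```

Correctly paired batches yield a lower contrastive loss than mismatched ones, which is what drives the embedding spaces of the two modalities together during joint training.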

📝 Abstract
The remote sensing community has recently seen the emergence of methods based on Large Vision and Language Models (LVLMs) that can address multiple tasks at the intersection of computer vision and natural language processing. To fully exploit the potential of such models, a significant focus has been given to the collection of large amounts of training data that cover multiple remote sensing-specific tasks, such as image captioning or visual question answering. However, the cost of using and training LVLMs is high, due to the large number of parameters. While multiple parameter-efficient adaptation techniques have been explored, the computational costs of training and inference with these models can remain prohibitive for most institutions. In this work, we explore the use of encoder-only architectures and propose a model that can effectively address multi-task learning while remaining compact in terms of the number of parameters. In particular, our model tackles combinations of tasks that are not typically explored in a unified model: the generation of text from remote sensing images and cross-modal retrieval. The results of our GeoMELT model (named for Multi-task Efficient Learning Transformer) on established benchmarks confirm the efficacy and efficiency of the proposed approach.
Problem

Research questions and friction points this paper is trying to address.

Develops a compact encoder model for remote sensing vision-language tasks
Addresses high computational costs of large vision-language models in remote sensing
Unifies text generation and cross-modal retrieval in a single efficient model
Innovation

Methods, ideas, or system contributions that make the work stand out.

Encoder-only architecture for multi-task learning
Compact model reducing parameters and computational cost
Unified approach for text generation and cross-modal retrieval
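With an encoder-only model, the retrieval half of the unified approach reduces to nearest-neighbour search in the shared embedding space. The sketch below is a hypothetical illustration (the function name and feature shapes are assumptions, not the paper's API): given a text-query embedding, it ranks image embeddings by cosine similarity.

```python
# Hypothetical sketch: cross-modal retrieval by cosine similarity in a
# shared image-text embedding space produced by an encoder-only model.
import numpy as np

def retrieve(query_txt_feat, image_feats, top_k=3):
    """Return indices of the top_k images most similar to the text query."""
    q = query_txt_feat / np.linalg.norm(query_txt_feat)
    imgs = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    sims = imgs @ q                      # cosine similarity per image
    return np.argsort(-sims)[:top_k]     # indices sorted by descending similarity
```

Because scoring is a single matrix-vector product over precomputed embeddings, retrieval stays cheap at inference time, which is consistent with the efficiency focus described above.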