🤖 AI Summary
Remote sensing image captioning has long suffered from reliance on English-language annotations and poor cross-lingual generalization. This paper introduces the first training-free multilingual remote sensing image captioning framework, integrating a domain-adapted SigLIP2 visual encoder, retrieval-augmented prompting, and multilingual large language models (LLMs) or vision-language models (VLMs) to enable zero-shot caption generation across ten languages. It proposes a PageRank-based re-ranking strategy over a multimodal graph of images and captions to enhance the coherence of the retrieved content. Experiments on four benchmark datasets demonstrate that the method is competitive with fully supervised English-only systems, that the PageRank re-ranking improves key metrics by up to 35%, and that generating captions directly in the target language outperforms post-hoc translation. These results validate the feasibility and effectiveness of zero-shot multilingual remote sensing image captioning.
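To make the retrieval step concrete, here is a minimal sketch of embedding-based caption retrieval, assuming image and caption embeddings precomputed with a SigLIP2-style encoder; the function and variable names are illustrative placeholders, not the paper's code:

```python
# Minimal top-k retrieval sketch: given a query image embedding from a
# (domain-adapted) SigLIP2-style encoder, return the datastore captions
# most similar to it. Embeddings are assumed precomputed elsewhere.
import numpy as np

def top_k_captions(query_emb: np.ndarray,
                   caption_embs: np.ndarray,
                   captions: list[str],
                   k: int = 5) -> list[str]:
    """query_emb: (d,) image embedding; caption_embs: (n, d) matrix."""
    # L2-normalize so the dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    c = caption_embs / np.linalg.norm(caption_embs, axis=1, keepdims=True)
    scores = c @ q                 # cosine similarity per datastore caption
    top = np.argsort(-scores)[:k]  # indices of the k best matches
    return [captions[i] for i in top]
```

The retrieved captions (and few-shot examples) would then be assembled into the prompt passed to the multilingual LLM or VLM.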
📝 Abstract
Remote sensing image captioning has advanced rapidly through encoder–decoder models, but the reliance on large annotated datasets and the focus on English restrict global applicability. To address these limitations, we propose the first training-free multilingual approach, based on retrieval-augmented prompting. For a given aerial image, we employ a domain-adapted SigLIP2 encoder to retrieve related captions and few-shot examples from a datastore, which are then provided to a language model. We explore two variants: an image-blind setup, where a multilingual Large Language Model (LLM) generates the caption from textual prompts alone, and an image-aware setup, where a Vision–Language Model (VLM) jointly processes the prompt and the input image. To improve the coherence of the retrieved content, we introduce a re-ranking strategy that applies PageRank to a graph of images and captions. Experiments on four benchmark datasets across ten languages demonstrate that our approach is competitive with fully supervised English-only systems and generalizes to other languages. Results also highlight the importance of re-ranking with PageRank, which yields improvements of up to 35% in performance metrics. We also observe that while VLMs tend to generate visually grounded but lexically diverse captions, LLMs can achieve stronger BLEU and CIDEr scores. Lastly, directly generating captions in the target language consistently outperforms translation-based strategies. Overall, our work delivers one of the first systematic evaluations of multilingual, training-free captioning for remote sensing imagery, advancing toward more inclusive and scalable multimodal Earth observation systems.
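As a rough illustration of the graph-based re-ranking, the sketch below builds a similarity graph over the retrieved images and captions and scores nodes with networkx's PageRank. The graph construction details (similarity threshold, edge weighting, shared index space) are assumptions made for illustration; the paper's exact formulation may differ:

```python
# Illustrative sketch of graph-based re-ranking: connect retrieved items
# (images and captions in one index space, captions first) whose pairwise
# similarity clears a threshold, run PageRank, and keep the top captions.
import networkx as nx
import numpy as np

def rerank_with_pagerank(captions: list[str],
                         sim: np.ndarray,
                         threshold: float = 0.3,
                         k: int = 5) -> list[str]:
    """sim[i, j]: pairwise similarity between retrieved items i and j."""
    n = sim.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    # Add an edge for each sufficiently similar pair, weighted by similarity.
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] >= threshold:
                g.add_edge(i, j, weight=float(sim[i, j]))
    # Weighted PageRank: central, mutually consistent items score highest.
    scores = nx.pagerank(g, weight="weight")
    caption_ids = range(len(captions))
    ranked = sorted(caption_ids, key=lambda i: scores.get(i, 0.0), reverse=True)
    return [captions[i] for i in ranked[:k]]
```

The intuition is that captions which agree with each other and with the retrieved images accumulate more PageRank mass, making them more coherent context for the prompt than raw similarity ranking alone.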