🤖 AI Summary
This work addresses cross-lingual prediction of semantic relationships between images and text in multilingual tweets. To overcome the lack of benchmarks for low-resource languages in prior research, we introduce the first multilingual vision–language relation classification task and construct Latvian-English TweetVLM, the first high-quality, human-annotated, linguistically aligned bilingual Twitter benchmark. Leveraging this benchmark, we systematically evaluate the cross-lingual generalization capabilities of multilingual vision–language models (e.g., X-VLM, FLAVA), incorporating cross-lingual transfer and balanced sampling strategies. Experimental results reveal that state-of-the-art models achieve strong performance in English but suffer an average accuracy drop of 12.3% in Latvian, exposing a critical bottleneck in low-resource language understanding. Our contributions are threefold: (1) a novel multilingual multimodal task; (2) the first curated bilingual benchmark for cross-lingual vision–language evaluation; and (3) empirical insights into the limitations of current models in low-resource settings, advancing research in cross-lingual multimodal understanding.
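The summary mentions evaluating vision–language models such as FLAVA on image-text relation classification. As a minimal sketch of what such an evaluation could look like, the snippet below attaches a linear head to FLAVA's multimodal [CLS] embedding and scores one (image, text) pair; the four-way label set, the file path, and the untrained classifier head are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch: scoring an image-text pair for relation classification
# with FLAVA. The 4-way label set and the linear head are illustrative
# assumptions, not the paper's actual setup; in practice the head would
# be fine-tuned on the benchmark's training split.
import torch
from PIL import Image
from transformers import FlavaProcessor, FlavaModel

processor = FlavaProcessor.from_pretrained("facebook/flava-full")
model = FlavaModel.from_pretrained("facebook/flava-full")

# Hypothetical labels in the style of prior Twitter text-image relation work.
LABELS = [
    "image adds, text represented",
    "image adds, text not represented",
    "image does not add, text represented",
    "image does not add, text not represented",
]

# Linear head on top of FLAVA's multimodal [CLS] embedding (untrained here).
classifier = torch.nn.Linear(model.config.multimodal_config.hidden_size, len(LABELS))

image = Image.open("tweet_image.jpg").convert("RGB")  # placeholder path
text = "Saulains rīts Rīgā!"                          # example Latvian tweet text

inputs = processor(text=[text], images=[image], return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
    cls = outputs.multimodal_embeddings[:, 0]  # [CLS] of the fused sequence
    logits = classifier(cls)

print(LABELS[logits.argmax(dim=-1).item()])
```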
📝 Abstract
Various social networks have allowed media uploads for over a decade now, yet it has not always been clear how an uploaded image relates to the posted text, or whether there is any relation at all. In this work, we explore how multilingual vision-language models tackle the task of image-text relation prediction in different languages, and we construct a dedicated balanced benchmark dataset from Twitter posts in Latvian along with their manual translations into English. We compare our results to previous work and show that more recently released vision-language model checkpoints are becoming increasingly capable at this task, although there is still much room for improvement.
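The abstract notes that the benchmark is balanced. A common way to balance a label-skewed collection is to downsample every class to the size of the rarest one; the sketch below illustrates that idea under an assumed column name (`relation_label`) and file name, since the paper's actual construction procedure is not described here.

```python
# Minimal sketch of class-balanced downsampling for benchmark construction.
# The column name "relation_label" and the file "tweets.csv" are assumed
# for illustration, not the paper's actual schema.
import pandas as pd

def balance_by_label(df: pd.DataFrame, label_col: str, seed: int = 42) -> pd.DataFrame:
    """Downsample every class to the size of the smallest class."""
    n_min = df[label_col].value_counts().min()
    return (
        df.groupby(label_col, group_keys=False)
          .apply(lambda g: g.sample(n=n_min, random_state=seed))
          .reset_index(drop=True)
    )

# Usage: one row per annotated (image, text) pair.
tweets = pd.read_csv("tweets.csv")
balanced = balance_by_label(tweets, "relation_label")
print(balanced["relation_label"].value_counts())  # all classes now equal in size
```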