VLURes: Benchmarking VLM Visual and Linguistic Understanding in Low-Resource Languages

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language models (VLMs) are predominantly evaluated on English short-text benchmarks, with little systematic assessment of multilingual capability, especially for low-resource languages, or of fine-grained long-text comprehension. Method: We introduce VLURes, the first multilingual vision-language understanding benchmark tailored to low-resource languages, covering English and Japanese alongside Swahili and Urdu. It comprises eight fine-grained vision-language tasks plus a novel image-text unrelatedness detection task. Data are curated from web sources in each target language, rigorously cleaned, and annotated via a hybrid pipeline that combines automated evaluation with native-speaker verification; answers and reasoning traces are generated through prompt-engineered VLM inference. Contribution/Results: Evaluation of ten VLMs shows that GPT-4o achieves 90.8% overall accuracy, 6.7 percentage points below human performance, while open-source models lag substantially further. VLURes establishes critical infrastructure for evaluating the multilingual capabilities of multimodal agents.
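The paper's own prompting code is not reproduced here; the snippet below is a minimal sketch of how one might elicit an answer plus rationale from a VLM for a single benchmark item, assuming an OpenAI-compatible chat API (the `openai` Python client). The prompt wording, item fields, and the `query_vlm` helper are illustrative assumptions, not the VLURes authors' pipeline.

```python
# Illustrative sketch only: prompt text, item fields, and helper name are
# assumptions for this example, not the VLURes authors' actual pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def query_vlm(image_url: str, passage: str, task_instruction: str,
              language: str, model: str = "gpt-4o") -> str:
    """Ask a VLM for an answer plus a short rationale in the target language."""
    prompt = (
        f"Task: {task_instruction}\n"
        f"Accompanying text:\n{passage}\n\n"
        f"Answer in {language}, then give a one-sentence rationale."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content

# Example: the image-text unrelatedness task, posed in Swahili.
answer = query_vlm(
    image_url="https://example.com/sample.jpg",       # placeholder image
    passage="(long Swahili article text goes here)",  # placeholder passage
    task_instruction="State whether the accompanying text is unrelated to the image.",
    language="Swahili",
)
print(answer)
```

The same call pattern can be repeated per task and per language, with the free-form responses then routed to automatic scoring and native-speaker review as the summary describes.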

📝 Abstract
Vision Language Models (VLMs) are pivotal for advancing perception in intelligent agents. Yet, evaluation of VLMs remains limited to predominantly English-centric benchmarks in which the image-text pairs comprise short texts. To evaluate fine-grained VLM abilities in four languages under long-text settings, we introduce a novel multilingual benchmark, VLURes, featuring eight vision-and-language tasks and a pioneering unrelatedness task that probe the fine-grained visual and linguistic understanding capabilities of VLMs across English, Japanese, and the low-resource languages Swahili and Urdu. Our datasets, curated from web resources in the target language, encompass ten diverse image categories and rich textual context, introducing valuable vision-language resources for Swahili and Urdu. By prompting VLMs to generate responses and rationales, evaluated automatically and by native speakers, we uncover performance disparities across languages and on tasks critical to intelligent agents, such as object recognition, scene understanding, and relationship understanding. We evaluated ten VLMs with VLURes. The best-performing model, GPT-4o, achieves an overall accuracy of 90.8% and lags human performance by 6.7%, and the gap is larger for open-source models. This gap highlights VLURes' critical role in developing intelligent agents that tackle multimodal visual reasoning.
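As a rough illustration of the automatic scoring side described above, the sketch below aggregates per-item correctness into per-language and overall accuracy and compares the result to a human baseline. The record layout and the 97.5% human figure are assumptions for the example (chosen only so that 90.8% sits 6.7 points below it), not released VLURes artifacts.

```python
# Illustrative aggregation only: record layout and the human-baseline figure
# are assumptions for this example, not released VLURes artifacts.
from collections import defaultdict

def accuracy_by_language(records):
    """records: iterable of dicts like {"language": "Swahili", "correct": True}."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["language"]] += 1
        hits[r["language"]] += int(r["correct"])
    per_lang = {lang: hits[lang] / totals[lang] for lang in totals}
    overall = sum(hits.values()) / sum(totals.values())
    return per_lang, overall

# Toy usage with made-up records.
records = [
    {"language": "English", "correct": True},
    {"language": "Swahili", "correct": False},
    {"language": "Urdu", "correct": True},
    {"language": "Japanese", "correct": True},
]
per_lang, overall = accuracy_by_language(records)
human_baseline = 0.975  # implied by 90.8% sitting 6.7 points below human (assumption)
print(per_lang, overall, overall - human_baseline)
```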
Problem

Research questions and friction points this paper is trying to address.

Evaluating VLMs' multilingual visual-linguistic understanding beyond English
Assessing fine-grained VLM capabilities in low-resource language contexts
Benchmarking VLM performance disparities across diverse linguistic settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual benchmark with eight vision-language tasks
Pioneering unrelatedness task for fine-grained understanding
Native-speaker evaluation combined with automatic assessment