🤖 AI Summary
Existing VLM evaluation benchmarks suffer from scarce high-quality human-annotated samples, overreliance on LLM-synthesized data, and English-centric bias. To address these limitations, we introduce PISA-Bench, a multilingual, multimodal benchmark derived from the expert-created Programme for International Student Assessment (PISA). It covers six languages (English, Spanish, German, Chinese, French, Italian), with all image-text pairs extracted from the English PISA tests by humans, then translated and verified. Adapting the PISA assessment framework to AI evaluation yields a standardized zero-shot evaluation protocol. Extensive experiments reveal substantial performance degradation on non-English languages and for smaller-scale models (<20B parameters), with markedly elevated error rates on spatial and geometric reasoning tasks. These findings systematically expose critical weaknesses in current multimodal models’ cross-lingual generalization and spatial understanding capabilities.
📝 Abstract
Vision-language models (VLMs) have demonstrated remarkable progress in multimodal reasoning. However, existing benchmarks remain limited in terms of high-quality, human-verified examples. Many current datasets rely on content synthetically generated by large language models (LLMs). Furthermore, most datasets are limited to English, as manual quality assurance of translated samples is time-consuming and costly. To fill this gap, we introduce PISA-Bench, a multilingual benchmark derived from English examples of the expert-created PISA tests, a unified framework for assessing student competencies in over eighty countries. Each example consists of human-extracted instructions, questions, answer options, and images, enriched with question type categories, and has been translated from English into five additional languages (Spanish, German, Chinese, French, and Italian), resulting in a fully parallel corpus covering six languages. We evaluate state-of-the-art vision-language models on PISA-Bench and find that small models (<20B parameters) in particular fail to achieve high test scores. We further find substantial performance degradation on non-English splits, as well as high error rates when models are tasked with spatial and geometric reasoning. By releasing the dataset and evaluation framework, we provide a resource for advancing research on multilingual multimodal reasoning.
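To make the dataset layout concrete, the following is a minimal sketch of the per-example structure the abstract describes (instruction, question, answer options, image, question type, language) together with an exact-match accuracy metric over gold answer keys. All class and field names here are illustrative assumptions, not the authors' released schema or evaluation code.

```python
from dataclasses import dataclass

# Hypothetical language codes for the six-way parallel corpus
# (English plus the five translation targets named in the abstract).
LANGUAGES = ["en", "es", "de", "zh", "fr", "it"]

@dataclass
class PisaBenchExample:
    """Illustrative schema; field names are assumptions, not the official format."""
    instruction: str     # human-extracted task instruction
    question: str        # the question text
    options: list        # answer options, e.g. ["A", "B", "C", "D"]
    image_path: str      # path to the associated image
    question_type: str   # category label, e.g. "spatial reasoning"
    language: str        # one of LANGUAGES
    answer: str          # gold answer key

def accuracy(predictions, examples):
    """Exact-match accuracy of predicted answer keys against gold answers."""
    correct = sum(pred == ex.answer for pred, ex in zip(predictions, examples))
    return correct / len(examples)

# Toy usage: two parallel examples (English and German) and dummy predictions.
examples = [
    PisaBenchExample("Look at the figure.", "Which shape fits?",
                     ["A", "B", "C", "D"], "img/001.png",
                     "spatial reasoning", "en", "B"),
    PisaBenchExample("Betrachte die Abbildung.", "Welche Form passt?",
                     ["A", "B", "C", "D"], "img/001.png",
                     "spatial reasoning", "de", "B"),
]
print(accuracy(["B", "C"], examples))  # 0.5
```

Keeping the corpus fully parallel in this way allows per-language accuracy to be compared directly on identical items, which is what makes the reported non-English degradation attributable to language rather than item difficulty.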