PISA-Bench: The PISA Index as a Multilingual and Multimodal Metric for the Evaluation of Vision-Language Models

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing VLM evaluation benchmarks suffer from scarce high-quality human-annotated samples, overreliance on LLM-synthesized data, and English-centric bias. To address these limitations, the authors introduce PISA-Bench, a multilingual, multimodal benchmark derived from the Programme for International Student Assessment (PISA). It covers six languages (English, Spanish, German, Chinese, French, Italian): all image-text pairs were extracted from the English PISA tests by humans, then translated into the other five languages and verified, yielding a fully parallel corpus. The work adapts the expert-created PISA assessment framework to AI evaluation under a zero-shot protocol. Extensive experiments reveal substantial performance degradation for mainstream VLMs on non-English splits and for smaller models (<20B parameters), with markedly elevated error rates on spatial and geometric reasoning tasks. These findings expose weaknesses in current multimodal models' cross-lingual generalization and spatial understanding.

📝 Abstract
Vision-language models (VLMs) have demonstrated remarkable progress in multimodal reasoning. However, existing benchmarks remain limited in terms of high-quality, human-verified examples. Many current datasets rely on content synthetically generated by large language models (LLMs). Furthermore, most datasets are limited to English, as manual quality assurance of translated samples is time-consuming and costly. To fill this gap, we introduce PISA-Bench, a multilingual benchmark derived from English examples of the expert-created PISA tests, a unified framework for the assessment of student competencies in over eighty countries. Each example consists of human-extracted instructions, questions, answer options, and images, enriched with question type categories, and has been translated from English into five additional languages (Spanish, German, Chinese, French, and Italian), resulting in a fully parallel corpus covering six languages. We evaluate state-of-the-art vision-language models on PISA-Bench and find that especially small models (<20B parameters) fail to achieve high test scores. We further find substantial performance degradation on non-English splits as well as high error rates when models are tasked with spatial and geometric reasoning. By releasing the dataset and evaluation framework, we provide a resource for advancing research on multilingual multimodal reasoning.
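The abstract describes the structure of each benchmark example (instruction, question, answer options, image, question type, language). A minimal sketch of such a record might look as follows; the field names and language codes are assumptions for illustration, not the released schema.

```python
from dataclasses import dataclass
from typing import List

# Assumed language codes for the six-way parallel corpus described in the
# abstract (English plus five translations).
LANGUAGES = ["en", "es", "de", "zh", "fr", "it"]

@dataclass
class PisaExample:
    """Hypothetical layout of one PISA-Bench example."""
    instruction: str     # human-extracted task instruction
    question: str        # question text
    options: List[str]   # multiple-choice answer options
    answer: str          # gold option label, e.g. "B"
    image_path: str      # associated image
    question_type: str   # category, e.g. "spatial reasoning"
    language: str        # one of LANGUAGES

example = PisaExample(
    instruction="Answer the multiple-choice question about the figure.",
    question="Which shape completes the pattern?",
    options=["circle", "square", "triangle", "hexagon"],
    answer="B",
    image_path="images/q1.png",
    question_type="spatial reasoning",
    language="en",
)
```

Because the corpus is fully parallel, the same example would exist once per language, differing only in the text fields and the `language` tag.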
Problem

Research questions and friction points this paper is trying to address.

Evaluating multilingual multimodal reasoning in vision-language models
Addressing limitations of synthetic and English-only benchmarks
Assessing spatial and geometric reasoning capabilities of VLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual benchmark derived from expert-created PISA tests
Parallel corpus covering six languages with human translations
Evaluation framework for multilingual multimodal reasoning assessment
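The evaluation framework scores models zero-shot on multiple-choice questions, broken down per language split. A minimal sketch of such a loop is below; the `model` callable interface, prompt template, and dict-based example format are placeholder assumptions, not the paper's released harness.

```python
from collections import defaultdict

def exact_match(pred: str, gold: str) -> bool:
    """Compare the first character of the prediction to the gold option letter."""
    return pred.strip().upper()[:1] == gold.strip().upper()[:1]

def evaluate(model, examples):
    """Zero-shot multiple-choice accuracy per language split.

    `model` is any callable (image_path, prompt) -> answer string; this
    interface is hypothetical. `examples` are dicts with the fields the
    abstract describes: instruction, question, options, answer, image, language.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        # Build a single zero-shot prompt with lettered options.
        prompt = (
            f"{ex['instruction']}\n{ex['question']}\n"
            + "\n".join(f"{chr(65 + i)}. {opt}" for i, opt in enumerate(ex["options"]))
            + "\nAnswer with the option letter only."
        )
        pred = model(ex["image_path"], prompt)
        lang = ex["language"]
        total[lang] += 1
        if exact_match(pred, ex["answer"]):
            correct[lang] += 1
    return {lang: correct[lang] / total[lang] for lang in total}
```

Reporting accuracy per language split, as above, is what surfaces the non-English degradation the paper highlights; a further breakdown by `question_type` would isolate the spatial/geometric reasoning errors.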