🤖 AI Summary
This study investigates the capabilities and limitations of vision-language models (VLMs) in low-resource language settings, specifically on authentic multimodal educational tasks. To this end, we introduce ViExam, the first Vietnamese multimodal examination benchmark, comprising 2,548 real-world questions across seven academic disciplines. Using ViExam, we systematically evaluate the cross-lingual multimodal reasoning of state-of-the-art VLMs, which reach an average accuracy of 57.74%, substantially below the human average (66.54%); only the thinking model o3 (74.07%) exceeds the human average, while still falling far short of best human performance (99.60%). Cross-lingual prompting, which pairs Vietnamese test content with English instructions, fails to help and instead lowers SOTA accuracy by about 1 percentage point, whereas human-in-the-loop collaboration partially improves performance by about 5 percentage points. This work provides the first standardized multimodal evaluation framework for a non-English educational context, empirically exposing critical bottlenecks of current VLMs in low-resource language education.
📝 Abstract
Vision-language models (VLMs) demonstrate remarkable capabilities on English multimodal tasks, but their performance on low-resource languages with genuinely multimodal educational content remains largely unexplored. In this work, we test how VLMs perform on Vietnamese educational assessments, investigating whether VLMs trained predominantly on English data can handle real-world cross-lingual multimodal reasoning. Our work presents the first comprehensive evaluation of VLM capabilities on multimodal Vietnamese exams by proposing ViExam, a benchmark containing 2,548 multimodal questions. We find that state-of-the-art VLMs achieve only 57.74% mean accuracy, while open-source models achieve 27.70%, across 7 academic domains: Mathematics, Physics, Chemistry, Biology, Geography, Driving Test, and IQ Test. Most VLMs underperform average human test-takers (66.54%), with only the thinking VLM o3 (74.07%) exceeding average human performance, yet still falling substantially short of best human performance (99.60%). Cross-lingual prompting with English instructions while maintaining Vietnamese content fails to improve performance, decreasing accuracy by 1 percentage point for SOTA VLMs. Human-in-the-loop collaboration can partially improve VLM performance by 5 percentage points. Code and data are available at: https://vi-exam.github.io.
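To make the cross-lingual prompting setup concrete, here is a minimal sketch of how English instructions might be paired with unchanged Vietnamese exam content. The instruction wording, function name, and answer format are illustrative assumptions, not the authors' exact implementation:

```python
# Sketch of cross-lingual prompting: an English instruction wrapped
# around the original Vietnamese question text. Hypothetical template,
# not ViExam's actual prompt.

ENGLISH_INSTRUCTION = (
    "You are answering a Vietnamese multiple-choice exam question. "
    "Read the question below (and any accompanying image), then "
    "reply with a single letter: A, B, C, or D."
)

def build_cross_lingual_prompt(vietnamese_question: str) -> str:
    """Combine a fixed English instruction with Vietnamese test content."""
    return f"{ENGLISH_INSTRUCTION}\n\n{vietnamese_question}"

if __name__ == "__main__":
    prompt = build_cross_lingual_prompt(
        "Câu 1: Tính đạo hàm của hàm số y = x^2.\n"
        "A. 2x   B. x   C. x^2   D. 2"
    )
    print(prompt)
```

The instruction language is the only variable changed relative to an all-Vietnamese prompt, which is what lets the benchmark isolate the effect of instruction language on accuracy.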