🤖 AI Summary
This work addresses the unreliability of existing Japanese visual question answering (VQA) benchmarks, which suffer from ambiguous questions, incorrect answers, and samples that do not genuinely require visual understanding, thereby compromising the evaluation of vision-language models (VLMs). To remedy this, the authors systematically curate seven existing Japanese VQA datasets through two rounds of human annotation, enforcing both linguistic clarity and visual grounding. The resulting collection, JAMMEval, yields evaluation scores that better reflect model capability, exhibits lower run-to-run variance, and improves the ability to distinguish between models of different capability levels. All curated data and code are publicly released to support reliable evaluation of Japanese VLMs.
📝 Abstract
Reliable evaluation is essential for the development of vision-language models (VLMs). However, Japanese VQA benchmarks have undergone far less iterative refinement than their English counterparts. As a result, many existing benchmarks contain issues such as ambiguous questions, incorrect answers, and instances that can be solved without visual grounding, undermining evaluation reliability and leading to misleading conclusions in model comparisons. To address these limitations, we introduce JAMMEval, a refined collection of Japanese benchmarks for reliable VLM evaluation. It is constructed by systematically refining seven existing Japanese benchmark datasets through two rounds of human annotation, improving both data quality and evaluation reliability. In our experiments, we evaluate open-weight and proprietary VLMs on JAMMEval and analyze the capabilities of recent models on Japanese VQA. We further demonstrate the effectiveness of our refinement by showing that the resulting benchmarks yield evaluation scores that better reflect model capability, exhibit lower run-to-run variance, and improve the ability to distinguish between models of different capability levels. We release our dataset and code to advance reliable evaluation of VLMs.