JaWildText: A Benchmark for Vision-Language Models on Japanese Scene Text Understanding

📅 2026-03-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language model benchmarks inadequately evaluate Japanese scene text understanding, in particular failing to account for linguistic characteristics such as mixed writing systems, vertical text layout, and an extensive character set. To address this gap, this work proposes JaWildText, the first multi-task diagnostic benchmark tailored to real-world, in-the-wild Japanese scenes, comprising 3,241 instances drawn from 2,961 newly captured images across three tasks: dense scene text visual question answering, receipt key information extraction, and handwriting OCR. The benchmark covers diverse writing directions, media types, and output formats. Using 1.12 million character-level annotations and a fine-grained evaluation framework, a comprehensive assessment of 14 open-weight VLMs finds a best average score of 0.64 across the three tasks. Error analysis identifies kanji recognition as a persistent bottleneck, underscoring the need for the benchmark's script-aware evaluation.
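
The summary's emphasis on script-aware diagnosis can be made concrete with a small sketch. The snippet below is an illustration, not the authors' released evaluation code: it buckets characters by Unicode block and reports per-script recognition recall after aligning a model's transcription against the reference with difflib. The block ranges and the alignment-based credit rule are assumptions for exposition.

```python
# Hypothetical sketch: per-script character recognition recall for a
# Japanese scene-text benchmark. NOT the paper's actual metric; the
# Unicode ranges and difflib alignment are illustrative choices.
from difflib import SequenceMatcher
from collections import Counter

def script_of(ch: str) -> str:
    """Bucket a character by Unicode block (coarse, illustrative)."""
    cp = ord(ch)
    if 0x3040 <= cp <= 0x309F:
        return "hiragana"
    if 0x30A0 <= cp <= 0x30FF:
        return "katakana"
    if 0x4E00 <= cp <= 0x9FFF or 0x3400 <= cp <= 0x4DBF:
        return "kanji"  # CJK Unified Ideographs (+ Extension A)
    if ch.isascii() and ch.isalnum():
        return "latin_digit"
    return "other"

def per_script_recall(reference: str, prediction: str) -> dict:
    """Fraction of reference characters recovered, split by script."""
    total, hit = Counter(), Counter()
    for ch in reference:
        total[script_of(ch)] += 1
    # Align the two strings; credit reference characters that fall
    # inside a matching block as correctly recognized.
    sm = SequenceMatcher(a=reference, b=prediction, autojunk=False)
    for block in sm.get_matching_blocks():
        for ch in reference[block.a : block.a + block.size]:
            hit[script_of(ch)] += 1
    return {s: hit[s] / n for s, n in total.items()}

print(per_script_recall("合計 1,200円", "合計 1,200内"))
# kanji recall drops below 1.0 (円 misread as 内); digits stay at 1.0
```

Breaking recall out by script in this way is what lets an error analysis attribute a low aggregate score specifically to kanji rather than to kana or Latin characters.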
📝 Abstract
Japanese scene text poses challenges that multilingual benchmarks often fail to capture, including mixed scripts, frequent vertical writing, and a character inventory far larger than the Latin alphabet. Although Japanese is included in several multilingual benchmarks, these resources do not adequately capture the language-specific complexities. Meanwhile, existing Japanese visual text datasets have primarily focused on scanned documents, leaving in-the-wild scene text underexplored. To fill this gap, we introduce JaWildText, a diagnostic benchmark for evaluating vision-language models (VLMs) on Japanese scene text understanding. JaWildText contains 3,241 instances from 2,961 images newly captured in Japan, with 1.12 million annotated characters spanning 3,643 unique character types. It comprises three complementary tasks that vary in visual organization, output format, and writing style: (i) Dense Scene Text Visual Question Answering (STVQA), which requires reasoning over multiple pieces of visual text evidence; (ii) Receipt Key Information Extraction (KIE), which tests layout-aware structured extraction from mobile-captured receipts; and (iii) Handwriting OCR, which evaluates page-level transcription across various media and writing directions. We evaluate 14 open-weight VLMs and find that the best model achieves an average score of 0.64 across the three tasks. Error analyses show recognition remains the dominant bottleneck, especially for kanji. JaWildText enables fine-grained, script-aware diagnosis of Japanese scene text capabilities, and will be released with evaluation code.
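
To make the receipt KIE task concrete, here is a minimal sketch of a field-level exact-match score for structured extraction. The field names, example values, and matching rule are assumptions for illustration, not the benchmark's published schema or metric.

```python
# Hypothetical sketch of a field-level score for the receipt KIE task.
# Keys, values, and the exact-match criterion are illustrative only.
from typing import Dict

def kie_field_accuracy(gold: Dict[str, str], pred: Dict[str, str]) -> float:
    """Share of gold fields reproduced verbatim by the model."""
    correct = sum(pred.get(k, "").strip() == v.strip()
                  for k, v in gold.items())
    return correct / len(gold)

gold = {"store_name": "セブン-イレブン", "date": "2024/05/01", "total": "1,200"}
pred = {"store_name": "セブン-イレブン", "date": "2024/05/01", "total": "1.200"}
print(kie_field_accuracy(gold, pred))  # 2/3: the total amount was misread
```

An exact-match rule like this is deliberately strict: a single misrecognized character in a numeric field (here "1.200" for "1,200") zeroes out that field, which is one plausible way a benchmark can surface recognition errors inside a structured-extraction score.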
Problem

Research questions and friction points this paper is trying to address.

Japanese scene text
vision-language models
multilingual benchmarks
character recognition
in-the-wild text
Innovation

Methods, ideas, or system contributions that make the work stand out.

Japanese scene text
vision-language models
benchmark
in-the-wild text understanding
kanji recognition