SEA-Vision: A Multilingual Benchmark for Comprehensive Document and Scene Text Understanding in Southeast Asia

📅 2026-03-16
🤖 AI Summary
Existing benchmarks predominantly focus on high-resource languages, limiting their ability to assess model performance on low-resource Southeast Asian languages, complex writing systems, and diverse document types. To address this gap, this work introduces the first multimodal document understanding benchmark covering 11 Southeast Asian languages, supporting both document parsing and Text-Centric Visual Question Answering (TEC-VQA). The benchmark comprises 15,234 hierarchically annotated document pages and 7,496 question-answer pairs. We propose an efficient hybrid annotation pipeline that integrates automated filtering, assistance from multimodal large language models, and lightweight validation by native speakers. Evaluations reveal a significant performance drop among state-of-the-art models on this benchmark, highlighting substantial shortcomings in current multilingual document understanding capabilities.

📝 Abstract
Multilingual document and scene text understanding plays an important role in applications such as search, finance, and public services. However, most existing benchmarks focus on high-resource languages and fail to evaluate models in realistic multilingual environments. In Southeast Asia, the diversity of languages, complex writing systems, and highly varied document types make this challenge even greater. We introduce SEA-Vision, a benchmark that jointly evaluates Document Parsing and Text-Centric Visual Question Answering (TEC-VQA) across 11 Southeast Asian languages. SEA-Vision contains 15,234 document parsing pages from nine representative document types, annotated with hierarchical page-, block-, and line-level labels. It also provides 7,496 TEC-VQA question-answer pairs that probe text recognition, numerical calculation, comparative analysis, logical reasoning, and spatial understanding. To make such multilingual, multi-task annotation feasible, we design a hybrid pipeline for Document Parsing and TEC-VQA. It combines automated filtering and scoring with MLLM-assisted labeling and lightweight native-speaker verification, greatly reducing manual labeling while maintaining high quality. We evaluate several leading multimodal models and observe pronounced performance degradation on low-resource Southeast Asian languages, highlighting substantial remaining gaps in multilingual document and scene text understanding. We believe SEA-Vision will help drive global progress in document and scene text understanding.
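The abstract's hybrid annotation pipeline (automated filtering and scoring, then MLLM-assisted labeling, then lightweight native-speaker verification) can be sketched as a simple staged workflow. This is a minimal illustration only: all class names, function names, and thresholds below are assumptions for exposition, not the authors' actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Page:
    """One document page flowing through the (hypothetical) pipeline."""
    image_id: str
    quality_score: float                      # assumed automated quality score
    draft_labels: list = field(default_factory=list)
    verified: bool = False

def auto_filter(pages, min_quality=0.6):
    """Stage 1: automated filtering/scoring drops pages not worth annotating.
    The 0.6 threshold is illustrative."""
    return [p for p in pages if p.quality_score >= min_quality]

def mllm_label(page):
    """Stage 2: placeholder for MLLM-assisted draft labeling.
    The real benchmark produces hierarchical page-, block-, and line-level labels."""
    page.draft_labels = [("page", "document_type"), ("line", "ocr_text")]
    return page

def native_speaker_verify(page):
    """Stage 3: lightweight native-speaker check of the machine drafts,
    which is far cheaper than annotating from scratch."""
    page.verified = True
    return page

def run_pipeline(pages):
    """Chain the three stages over a batch of pages."""
    return [native_speaker_verify(mllm_label(p)) for p in auto_filter(pages)]

pages = [Page("doc_001", 0.9), Page("doc_002", 0.3)]
annotated = run_pipeline(pages)
print([p.image_id for p in annotated])  # low-quality page is filtered out
```

The key cost-saving idea the abstract describes is that human effort is concentrated in the final verification stage, after automation has filtered the data and drafted the labels.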
Problem

Research questions and friction points this paper is trying to address.

multilingual document understanding
scene text understanding
Southeast Asian languages
benchmark evaluation
low-resource languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

multilingual document understanding
TEC-VQA
hybrid annotation pipeline
low-resource languages
Southeast Asian benchmark