ThaiOCRBench: A Task-Diverse Benchmark for Vision-Language Understanding in Thai

📅 2025-11-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing vision-language model (VLM) benchmarks severely underrepresent low-resource languages such as Thai—particularly in document structure understanding tasks. To address this gap, we introduce ThaiOCRBench, the first multi-task vision-language benchmark dedicated to Thai, comprising 2,808 samples across 13 document understanding tasks. Our contributions are threefold: (1) We establish the first Thai-specific OCR evaluation framework, enabling zero-shot assessment; (2) Through systematic evaluation, we expose critical weaknesses of mainstream VLMs in handwritten text recognition and fine-grained text extraction, identifying root causes including linguistic bias and structural misalignment; (3) Rigorous human annotation and error analysis demonstrate that closed-source models (e.g., Gemini) outperform open-source counterparts on complex scripts. The benchmark is fully open-sourced, providing a reproducible standard and actionable insights for advancing Thai document intelligence.

📝 Abstract
We present ThaiOCRBench, the first comprehensive benchmark for evaluating vision-language models (VLMs) on Thai text-rich visual understanding tasks. Despite recent progress in multimodal modeling, existing benchmarks predominantly focus on high-resource languages, leaving Thai underrepresented, especially in tasks requiring document structure understanding. ThaiOCRBench addresses this gap by offering a diverse, human-annotated dataset comprising 2,808 samples across 13 task categories. We evaluate a wide range of state-of-the-art VLMs in a zero-shot setting, spanning both proprietary and open-source systems. Results show a significant performance gap, with proprietary models (e.g., Gemini 2.5 Pro) outperforming open-source counterparts. Notably, fine-grained text recognition and handwritten content extraction exhibit the steepest performance drops among open-source models. Through detailed error analysis, we identify key challenges such as language bias, structural mismatch, and hallucinated content. ThaiOCRBench provides a standardized framework for assessing VLMs in low-resource, script-complex settings, and offers actionable insights for improving Thai-language document understanding.
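The zero-shot protocol described above (each image-prompt pair sent to the model once, with no fine-tuning, then scored per task category and macro-averaged) can be sketched as follows. This is an illustrative sketch, not the paper's actual harness: the `Sample` fields, the exact-match metric, and the `evaluate` function are all assumptions; real benchmarks of this kind typically use task-specific metrics such as CER/WER rather than exact match.

```python
# Hypothetical sketch of zero-shot VLM evaluation over a multi-task benchmark
# like ThaiOCRBench. All names here are illustrative assumptions.

from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Sample:
    task: str       # one of the 13 task categories, e.g. "handwritten_ocr"
    image: bytes    # document image
    prompt: str     # task instruction
    reference: str  # human-annotated ground truth

def exact_match(prediction: str, reference: str) -> float:
    # Simplified metric; real scoring would likely be CER/WER or task-specific.
    return float(prediction.strip() == reference.strip())

def evaluate(model, samples):
    """Score a VLM zero-shot: one call per sample, no fine-tuning."""
    per_task = defaultdict(list)
    for s in samples:
        prediction = model(s.image, s.prompt)  # single zero-shot call
        per_task[s.task].append(exact_match(prediction, s.reference))
    # Average within each task category, then macro-average across tasks,
    # so small categories are not drowned out by large ones.
    task_scores = {t: sum(v) / len(v) for t, v in per_task.items()}
    macro = sum(task_scores.values()) / len(task_scores)
    return task_scores, macro
```

Macro-averaging across task categories is what makes per-task weaknesses (e.g., the handwritten-text drop the paper reports for open-source models) visible in the headline score rather than being averaged away.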
Problem

Research questions and friction points this paper is trying to address.

Addressing underrepresentation of Thai language in vision-language benchmarks
Evaluating VLMs on diverse Thai text-rich visual understanding tasks
Identifying performance gaps in low-resource script-complex settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

First Thai text-rich vision-language benchmark
Evaluates 13 task categories with human annotations
Identifies performance gaps in low-resource language settings
Surapon Nonesung
SCB 10X R&D, SCB 10X, SCBX Group, Thailand
Teetouch Jaknamon
SCB 10X R&D, SCB 10X, SCBX Group, Thailand
Sirinya Chaiophat
SCB 10X R&D, SCB 10X, SCBX Group, Thailand
Natapong Nitarach
SCB 10X R&D, SCB 10X, SCBX Group, Thailand
Chanakan Wittayasakpan
SCB 10X R&D, SCB 10X, SCBX Group, Thailand
Warit Sirichotedumrong
SCB 10X Company Limited
Multimedia Processing, Signal Processing
Adisai Na-Thalang
SCB 10X R&D, SCB 10X, SCBX Group, Thailand
Kunat Pipatanakul
SCB 10X
Large language model, Low-resource NLP