Towards Universal Khmer Text Recognition

📅 2026-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of Khmer optical character recognition (OCR), which stem from the script's complexity and the scarcity of multimodal data encompassing printed, handwritten, and scene text. Existing approaches perform poorly on non-printed modalities and, when deployed as separate per-modality models, incur substantial memory overhead and routing errors. To overcome these limitations, we propose the first unified Khmer text recognition framework, an end-to-end deep learning architecture that jointly handles all modalities without modality-specific modeling. The core innovation is a Modality-Aware Adaptive Feature Selection (MAFS) mechanism that lets the model dynamically adjust visual features according to the input modality, significantly enhancing cross-modal robustness, particularly in low-resource settings. We also introduce and publicly release the first comprehensive benchmark dataset for universal Khmer text recognition. Our model achieves state-of-the-art performance across multiple modalities, advancing OCR research for low-resource languages.

📝 Abstract
Khmer is a low-resource language with a complex script, presenting significant challenges for optical character recognition (OCR). While printed document text recognition has advanced owing to available datasets, performance on other modalities, such as handwritten and scene text, remains limited by data scarcity. Training a separate model for each modality precludes cross-modality transfer learning, from which modalities with limited data could otherwise benefit. Moreover, deploying many modality-specific models incurs significant memory overhead and requires error-prone routing of each input image to the appropriate model. On the other hand, simply training on a combined dataset with a non-uniform data distribution across modalities often degrades performance on underrepresented modalities. To address these issues, we propose a universal Khmer text recognition (UKTR) framework capable of handling diverse text modalities. Central to our method is a novel modality-aware adaptive feature selection (MAFS) technique designed to adapt visual features to the modality of a given input image and enhance recognition robustness across modalities. Extensive experiments demonstrate that our model achieves state-of-the-art (SoTA) performance. Furthermore, we introduce the first comprehensive benchmark for universal Khmer text recognition, which we release to the community to facilitate future research. Our datasets and models are accessible via a gated repository (in review).
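The abstract describes MAFS only at a high level: a mechanism that adapts visual features to the modality of the input image. The paper does not give its exact formulation, but one plausible reading is a soft gating scheme, where a small gate predicts modality weights from the visual feature and the output mixes per-modality projections of that feature. The sketch below illustrates this interpretation in plain Python; the class name `MAFSBlock`, its dimensions, and the gating design are all assumptions for illustration, not the authors' implementation.

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class MAFSBlock:
    """Hypothetical sketch of modality-aware adaptive feature selection.

    A gate maps a pooled visual feature to soft modality weights; the
    output is the weighted mix of per-modality linear projections of
    that feature, so features adapt to the (inferred) input modality.
    """

    def __init__(self, dim, n_modalities, seed=0):
        rng = random.Random(seed)
        # Gate weights: one row of size `dim` per modality.
        self.gate = [[rng.uniform(-0.1, 0.1) for _ in range(dim)]
                     for _ in range(n_modalities)]
        # One (dim x dim) projection per modality (printed / handwritten / scene).
        self.proj = [[[rng.uniform(-0.1, 0.1) for _ in range(dim)]
                      for _ in range(dim)]
                     for _ in range(n_modalities)]

    def __call__(self, feat):
        # Soft modality weights inferred from the feature itself,
        # so no explicit modality label (or router) is needed.
        logits = [sum(w * f for w, f in zip(row, feat)) for row in self.gate]
        weights = softmax(logits)
        # Weighted combination of the modality-specific projections.
        out = [0.0] * len(feat)
        for wk, proj in zip(weights, self.proj):
            for i, row in enumerate(proj):
                out[i] += wk * sum(p * f for p, f in zip(row, feat))
        return out, weights

block = MAFSBlock(dim=4, n_modalities=3)
adapted, weights = block([1.0, 0.5, -0.2, 0.3])
```

Because the gate produces soft weights rather than a hard decision, the block avoids the brittle per-image routing the abstract criticizes: every input flows through one shared model, and misjudged modalities degrade gracefully instead of being sent to the wrong network.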
Problem

Research questions and friction points this paper is trying to address.

Khmer text recognition
low-resource language
multimodal OCR
data scarcity
cross-modality transfer
Innovation

Methods, ideas, or system contributions that make the work stand out.

universal text recognition
modality-aware adaptive feature selection
Khmer OCR
cross-modality transfer learning
low-resource language
Marry Kong
Techo Startup Center, Ministry of Economy and Finance, Cambodia
Rina Buoy
Techo Startup Center, Ministry of Economy and Finance, Cambodia
Sovisal Chenda
Techo Startup Center, Ministry of Economy and Finance, Cambodia
Nguonly Taing
Techo Startup Center, Ministry of Economy and Finance, Cambodia
Masakazu Iwamura
Osaka Metropolitan University
Koichi Kise
Professor of Graduate School of Informatics, Osaka Metropolitan University
Document Image Analysis · Computer Vision · Human Sensing and Actuation