🤖 AI Summary
This work addresses the challenges of Khmer optical character recognition (OCR), which stem from the script's complexity and the scarcity of multimodal data spanning printed, handwritten, and scene text. Existing approaches perform poorly on non-printed modalities, and deploying separate per-modality models incurs substantial memory overhead and routing errors. To overcome these limitations, we propose the first unified Khmer text recognition framework, an end-to-end deep learning architecture that jointly handles all modalities without modality-specific modeling. The core innovation is a Modality-Aware Adaptive Feature Selection (MAFS) mechanism that lets the model dynamically adjust visual features according to the input modality, significantly enhancing cross-modal robustness, particularly in low-resource settings. We also introduce and publicly release the first comprehensive benchmark dataset for universal Khmer text recognition. Our model achieves state-of-the-art performance across multiple modalities, advancing OCR research for low-resource languages.
📝 Abstract
Khmer is a low-resource language with a complex script, which presents significant challenges for optical character recognition (OCR). While printed document text recognition has advanced thanks to available datasets, performance on other modalities, such as handwritten and scene text, remains limited by data scarcity. Training a separate model for each modality precludes cross-modality transfer learning, from which modalities with limited data could otherwise benefit. Moreover, deploying many modality-specific models incurs significant memory overhead and requires error-prone routing of each input image to the appropriate model. On the other hand, simply training on a combined dataset with a non-uniform data distribution across modalities often degrades performance on underrepresented modalities. To address these issues, we propose a universal Khmer text recognition (UKTR) framework capable of handling diverse text modalities. Central to our method is a novel modality-aware adaptive feature selection (MAFS) technique that adapts visual features to the modality of the input image, enhancing recognition robustness across modalities. Extensive experiments demonstrate that our model achieves state-of-the-art (SoTA) performance. Furthermore, we introduce the first comprehensive benchmark for universal Khmer text recognition, which we release to the community to facilitate future research. Our datasets and models can be accessed via this gated repository\footnote{in review}.
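The abstract does not specify how MAFS is implemented, but one common way to realize "adapting visual features to the input modality" is a learned gating network that pools the encoder's features into a global descriptor and predicts per-channel selection weights. The sketch below illustrates that general idea only; the function name, shapes, and two-layer gate are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mafs_sketch(feats, w1, w2):
    """Illustrative sketch of a modality-aware feature gate (assumed design,
    not the paper's MAFS): pool features into a global descriptor, predict
    per-channel weights with a tiny MLP, and re-weight the features."""
    # feats: (seq_len, channels) visual features from a shared encoder
    descriptor = feats.mean(axis=0)            # global average pooling
    hidden = np.maximum(descriptor @ w1, 0.0)  # ReLU hidden layer
    gates = sigmoid(hidden @ w2)               # per-channel weights in (0, 1)
    return feats * gates                       # modality-adapted features

rng = np.random.default_rng(0)
feats = rng.standard_normal((32, 128))         # 32 positions, 128 channels
w1 = rng.standard_normal((128, 64)) * 0.1      # hypothetical gate parameters
w2 = rng.standard_normal((64, 128)) * 0.1
adapted = mafs_sketch(feats, w1, w2)
print(adapted.shape)  # (32, 128)
```

Because the gates lie in (0, 1), the module can only attenuate channels, acting as a soft feature selector conditioned on the input, which is one plausible reading of "adaptive feature selection."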