🤖 AI Summary
This work addresses the challenges of optical character recognition (OCR) for minority languages, which include complex writing systems, scarce annotated data, and significant variation between historical and modern scripts—factors that severely limit the generalization of existing methods in low-resource or zero-shot settings. To overcome these limitations, the authors propose OmniOCR, a novel framework that integrates dynamic low-rank adaptation (Dynamic LoRA) with sparse regularization. This approach dynamically allocates model capacity across layers and scripts while pruning redundant parameter updates, enabling efficient, compact, and zero-shot-friendly parameter-efficient fine-tuning. Evaluated on four datasets—TibetanMNIST, Shuishu, Ancient Yi, and Dongba scripts—OmniOCR outperforms state-of-the-art methods by 39% to 66% in accuracy, achieving substantial gains without incurring additional inference overhead.
📝 Abstract
Optical character recognition (OCR) has advanced rapidly with deep learning and multimodal models, yet most methods focus on well-resourced scripts such as Latin and Chinese. Ethnic minority languages remain underexplored due to complex writing systems, scarce annotations, and diverse historical and modern forms, making generalization in low-resource or zero-shot settings challenging. To address these challenges, we present OmniOCR, a universal framework for ethnic minority scripts. OmniOCR introduces Dynamic Low-Rank Adaptation (Dynamic LoRA) to allocate model capacity across layers and scripts, enabling effective adaptation while preserving pretrained knowledge. A sparsity regularization term prunes redundant updates, ensuring compact and efficient adaptation without extra inference cost. Evaluations on TibetanMNIST, Shuishu, Ancient Yi, and Dongba show that OmniOCR outperforms zero-shot foundation models and standard post-training, achieving state-of-the-art accuracy with superior parameter efficiency and improving accuracy over state-of-the-art baselines by 39%–66% across the four datasets. Code: https://github.com/AIGeeksGroup/OmniOCR.
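To make the mechanism concrete, here is a minimal NumPy sketch of the idea the abstract describes: a low-rank update whose rank directions are scaled by learnable gates, an L1 penalty that drives redundant gates toward zero, and a pruning step after which the surviving update merges back into the frozen weight, so inference pays no extra cost. The matrix sizes, gate threshold, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 16, 16, 8  # hypothetical layer width and maximum LoRA rank

W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.02      # LoRA down-projection
B = rng.standard_normal((d_out, r)) * 0.02     # LoRA up-projection
g = rng.uniform(0.0, 1.0, size=r)              # per-rank gates (learned in training)

def sparsity_penalty(g, lam=1e-2):
    # L1 penalty added to the training loss; pushes redundant gates toward zero
    return lam * np.abs(g).sum()

def prune(g, tau=0.3):
    # zero out gates below a threshold; nonzero gates define the effective rank
    return np.where(np.abs(g) >= tau, g, 0.0)

g_pruned = prune(g)
effective_rank = int(np.count_nonzero(g_pruned))  # capacity actually kept

# The pruned low-rank update merges into W once, so the deployed layer is a
# single dense matrix with no additional inference-time computation.
W_merged = W + B @ np.diag(g_pruned) @ A

x = rng.standard_normal(d_in)
y = W_merged @ x  # forward pass at the cost of the original layer
```

In this sketch, "dynamic" allocation corresponds to each layer (and script) learning its own gate vector, so effective rank varies per layer rather than being fixed in advance.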