HTR-JAND: Handwritten Text Recognition with Joint Attention Network and Knowledge Distillation

📅 2024-12-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address challenges in historical document handwriting recognition, including degraded script quality and poor cross-lingual and diachronic adaptability, this paper proposes an end-to-end framework integrating adaptive feature extraction, joint attention modeling, and curriculum-based knowledge distillation. We design a FullGatedConv2d + Squeeze-and-Excitation (SE) backbone for robust feature learning and introduce a hybrid attention mechanism that combines Multi-Head Self-Attention with Proxima Attention to better model ambiguous and heterogeneous handwriting. We further propose a curriculum-based knowledge distillation strategy that compresses the model without loss of accuracy. Experiments on the IAM, RIMES, and Bentham datasets yield character error rates (CER) of 1.23%, 1.02%, and 2.02%, respectively. The distilled student model contains only 0.75M parameters, 48% fewer than the teacher, while maintaining state-of-the-art performance.
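The gated-convolution-plus-SE backbone described above can be illustrated with a minimal NumPy sketch. All names and shapes here are illustrative assumptions, not the authors' implementation: a gated layer splits the convolution output channels into features and a sigmoid gate, and the SE block rescales channels via a small bottleneck MLP.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def full_gated_conv_output(conv_out):
    """Gated activation (hypothetical FullGatedConv2d sketch): split the
    conv output channels into features and gates, then modulate features
    element-wise by the sigmoid of the gates."""
    c = conv_out.shape[0] // 2
    features, gates = conv_out[:c], conv_out[c:]
    return features * sigmoid(gates)

def squeeze_excite(x, w1, w2):
    """SE block: global-average-pool each channel (squeeze), pass through a
    small bottleneck MLP (excite), and rescale channels by the result."""
    # x: (C, H, W); w1: (C//r, C); w2: (C, C//r)
    z = x.mean(axis=(1, 2))                     # squeeze: per-channel statistic
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))   # excite: ReLU bottleneck + sigmoid
    return x * s[:, None, None]                 # channel-wise reweighting

rng = np.random.default_rng(0)
conv_out = rng.standard_normal((8, 4, 4))   # pretend a conv produced 8 channels
gated = full_gated_conv_output(conv_out)    # -> (4, 4, 4)
w1 = rng.standard_normal((2, 4))            # illustrative reduction ratio r = 2
w2 = rng.standard_normal((4, 2))
out = squeeze_excite(gated, w1, w2)
print(gated.shape, out.shape)               # (4, 4, 4) (4, 4, 4)
```

Because both the gate and the SE weights pass through a sigmoid, each stage can only attenuate activations, which is what lets the network suppress background noise and degraded strokes channel by channel.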

📝 Abstract
Despite significant advances in deep learning, current Handwritten Text Recognition (HTR) systems struggle with the inherent complexity of historical documents, including diverse writing styles, degraded text quality, and computational efficiency requirements across multiple languages and time periods. This paper introduces HTR-JAND (Handwritten Text Recognition with Joint Attention Network and Knowledge Distillation), an efficient HTR framework that combines advanced feature extraction with knowledge distillation. Our architecture incorporates three key components: (1) a CNN architecture integrating FullGatedConv2d layers with Squeeze-and-Excitation blocks for adaptive feature extraction, (2) a Combined Attention mechanism fusing Multi-Head Self-Attention with Proxima Attention for robust sequence modeling, and (3) a Knowledge Distillation framework enabling efficient model compression while preserving accuracy through curriculum-based training. The HTR-JAND framework implements a multi-stage training approach combining curriculum learning, synthetic data generation, and multi-task learning for cross-dataset knowledge transfer. We enhance recognition accuracy through context-aware T5 post-processing, particularly effective for historical documents. Comprehensive evaluations demonstrate HTR-JAND's effectiveness, achieving state-of-the-art Character Error Rates (CER) of 1.23%, 1.02%, and 2.02% on the IAM, RIMES, and Bentham datasets respectively. Our Student model achieves a 48% parameter reduction (0.75M versus 1.5M parameters) while maintaining competitive performance through efficient knowledge transfer. Source code and pre-trained models are available at https://github.com/DocumentRecognitionModels/HTR-JAND.
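The knowledge-distillation component blends a hard loss on ground-truth labels with a temperature-softened divergence toward the teacher's output distribution. A minimal sketch of that soft-label objective follows; the temperature, weighting, and per-frame cross-entropy stand-in are illustrative assumptions, not the paper's exact CTC-based formulation:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend hard cross-entropy on ground-truth labels with a temperature-
    softened KL divergence toward the teacher's distribution."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    # T^2 rescales the soft-target term, as in standard distillation practice
    return np.mean(alpha * (T ** 2) * kl + (1 - alpha) * hard)

rng = np.random.default_rng(1)
teacher = rng.standard_normal((3, 10))   # teacher logits: batch of 3, 10 classes
student = rng.standard_normal((3, 10))   # student logits
labels = np.array([2, 5, 7])             # ground-truth class indices
print(distillation_loss(student, teacher, labels))
```

In a curriculum-based variant such as the one the paper describes, `alpha` and the difficulty of the training samples would be scheduled over training rather than held fixed.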
Problem

Research questions and friction points this paper is trying to address.

Handwriting Recognition
Historical Documents
Multilingual Support
Innovation

Methods, ideas, or system contributions that make the work stand out.

Feature Extraction
Attention Mechanism
Knowledge Distillation
Mohammed Hamdan
Synchromedia laboratory, École de Technologie Supérieure (ÉTS), University of Quebec, Montreal, Canada
Abderrahmane Rahiche
Synchromedia laboratory, École de Technologie Supérieure (ÉTS), University of Quebec, Montreal, Canada
Mohamed Cheriet
Full Professor, ÉTS (U. of Quebec), Former Canada Research Chair, SYNCHROMEDIA Lab, CIRODD
Pattern Recognition, Machine Learning, Image Processing, Sustainable Intell Cloud & Networks, Energy