Learning to Align: Addressing Character Frequency Distribution Shifts in Handwritten Text Recognition

📅 2025-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
In handwritten text recognition, historical and geographical variation induces shifts in character frequency distributions, degrading the generalization of generic models on domain-specific subsets. To address this, we propose a Wasserstein-distance-based distribution alignment framework: (1) we explicitly formulate character frequency distribution alignment as a differentiable loss term for the first time; (2) we design a distribution-aware training objective and enable inference-time, distribution-guided beam search decoding without requiring model retraining. The method thus jointly improves training and supports plug-and-play deployment. Extensive experiments across multiple handwritten datasets and diverse recognition architectures demonstrate consistent and substantial improvements in overall accuracy and robustness, particularly on distribution-shifted subsets. The source code is publicly available.
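The paper's exact loss formulation is not reproduced on this page; as a minimal sketch, the alignment term can be thought of as the 1-Wasserstein distance between the empirical character frequency distribution of predicted text and a target distribution derived from training data. The sketch below assumes character indices in a fixed alphabet as the ground metric and uses the cumulative-difference form of the 1D Wasserstein distance; all function names are illustrative, not from the authors' code.

```python
from collections import Counter

def char_freq_dist(text, alphabet):
    """Empirical character frequency distribution over a fixed alphabet."""
    counts = Counter(c for c in text if c in alphabet)
    total = sum(counts.values()) or 1  # avoid division by zero on empty text
    return [counts[c] / total for c in alphabet]

def wasserstein_1d(p, q):
    """1-Wasserstein distance between two discrete distributions sharing
    the same support ordering (sum of absolute cumulative differences)."""
    dist = cp = cq = 0.0
    for pi, qi in zip(p, q):
        cp += pi
        cq += qi
        dist += abs(cp - cq)
    return dist

alphabet = sorted(set("abcdefghijklmnopqrstuvwxyz "))
# Target distribution estimated from (here, toy) training text.
target = char_freq_dist("the quick brown fox jumps over the lazy dog", alphabet)
# Distribution of a model prediction with a few character errors.
pred = char_freq_dist("teh quick brwn fox jumps ovr the lazy dog", alphabet)
loss = wasserstein_1d(pred, target)
```

In the paper this term is differentiable so it can be combined with the standard recognition loss during training; the hard counts above would then be replaced by soft, probability-weighted frequencies from the model's output distribution.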

📝 Abstract
Handwritten text recognition aims to convert visual input into machine-readable text, and it remains challenging due to the evolving and context-dependent nature of handwriting. Character sets change over time, and character frequency distributions shift across historical periods or regions, often causing models trained on broad, heterogeneous corpora to underperform on specific subsets. To tackle this, we propose a novel loss function that incorporates the Wasserstein distance between the character frequency distribution of the predicted text and a target distribution empirically derived from training data. By penalizing divergence from expected distributions, our approach enhances both accuracy and robustness under temporal and contextual intra-dataset shifts. Furthermore, we demonstrate that character distribution alignment can also improve existing models at inference time without requiring retraining by integrating it as a scoring function in a guided decoding scheme. Experimental results across multiple datasets and architectures confirm the effectiveness of our method in boosting generalization and performance. We open source our code at https://github.com/pkaliosis/fada.
Problem

Research questions and friction points this paper is trying to address.

Addressing character frequency shifts in handwritten text recognition
Improving model accuracy under temporal and contextual dataset shifts
Enhancing existing models without retraining via guided decoding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel Wasserstein distance-based loss function
Guided decoding with distribution alignment
Improves models without retraining
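The inference-time variant can be pictured as rescoring beam search candidates: each hypothesis is ranked by its model log-probability minus a penalty for diverging from the target character distribution. The sketch below is a hypothetical rescoring rule, not the authors' implementation; the weighting scheme (`lam`) and helper names are assumptions.

```python
from collections import Counter

def char_freq(text, alphabet):
    """Empirical character frequency distribution over a fixed alphabet."""
    counts = Counter(c for c in text if c in alphabet)
    total = sum(counts.values()) or 1
    return [counts[c] / total for c in alphabet]

def w1(p, q):
    """1-Wasserstein distance in cumulative-difference form, assuming
    both distributions share the same support ordering."""
    d = cp = cq = 0.0
    for pi, qi in zip(p, q):
        cp += pi
        cq += qi
        d += abs(cp - cq)
    return d

def rescore(candidates, target_dist, alphabet, lam=0.5):
    """Pick the (text, logprob) candidate maximizing model log-probability
    minus a distribution-alignment penalty (combination rule illustrative)."""
    return max(
        candidates,
        key=lambda cand: cand[1] - lam * w1(char_freq(cand[0], alphabet), target_dist),
    )

alphabet = list("abcdefghijklmnopqrstuvwxyz ")
# Target distribution from a (toy) domain-specific reference text.
target = char_freq("reference corpus text in the target domain", alphabet)
beams = [("hand written text", -1.3), ("hnnd wrotten test", -1.2)]
best_text, best_logprob = rescore(beams, target, alphabet)
```

Because the penalty is computed purely from decoded text, this plugs into any existing recognizer's beam search without retraining, which is the "plug-and-play" property the summary highlights.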
Panagiotis Kaliosis
Stony Brook University, New York, USA; Archimedes/Athena RC, Athens, Greece
John Pavlopoulos
Athens University of Economics and Business
Machine Learning · NLP · Data Science