CER-HV: A CER-Based Human-in-the-Loop Framework for Cleaning Datasets Applied to Arabic-Script HTR

📅 2026-01-23
🤖 AI Summary
This work addresses the performance limitations of Arabic-script handwritten text recognition (HTR) caused by pervasive label noise in training data, including transcription, segmentation, orientation, and non-text errors. The authors propose CER-HV, a framework that combines character error rate (CER)-driven sample ranking with human-in-the-loop validation to systematically detect and clean label noise in Arabic-script datasets. Using a CRNN-based CER estimator trained with early stopping, the method identifies erroneous samples with 90% precision on the Muharaf dataset and 80–86% on PHTI. After cleaning with CER-HV, HTR models achieve a 1.0–1.8% absolute CER reduction and establish state-of-the-art results across multiple benchmarks, offering a general and efficient paradigm for denoising high-noise handwritten text data.

📝 Abstract
Handwritten text recognition (HTR) for Arabic-script languages still lags behind Latin-script HTR, despite recent advances in model architectures, datasets, and benchmarks. We show that data quality is a significant limiting factor in many published datasets and propose CER-HV (CER-based Ranking with Human Verification) as a framework to detect and clean label errors. CER-HV combines a CER-based noise detector, built on a carefully configured Convolutional Recurrent Neural Network (CRNN) with early stopping to avoid overfitting noisy samples, and a human-in-the-loop (HITL) step that verifies high-ranking samples. The framework reveals that several existing datasets contain previously underreported problems, including transcription, segmentation, orientation, and non-text content errors. These have been identified with up to 90 percent precision on the Muharaf dataset and 80–86 percent on PHTI. We also show that our CRNN achieves state-of-the-art performance across five of the six evaluated datasets, reaching 8.45 percent Character Error Rate (CER) on KHATT (Arabic), 8.26 percent on PHTI (Pashto), 10.66 percent on Ajami, and 10.11 percent on Muharaf (Arabic), all without any data cleaning. We establish a new baseline of 11.3 percent CER on the PHTD (Persian) dataset. Applying CER-HV improves the evaluation CER by 0.3–0.6 percent on the cleaner datasets and 1.0–1.8 percent on the noisier ones. Although our experiments focus on documents written in Arabic-script languages, including Arabic, Persian, Urdu, Ajami, and Pashto, the framework is general and can be applied to other text recognition datasets.
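The core ranking step described above can be sketched in a few lines: score each training sample by the CER between its label and a trained model's prediction, then surface the highest-scoring samples for human verification. This is a minimal illustration, not the paper's implementation; `predict` stands in for any trained recognizer (the paper's CRNN with early stopping), and the names here are hypothetical.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,            # deletion
                            curr[j - 1] + 1,        # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate = edit distance / reference length."""
    return edit_distance(reference, hypothesis) / max(len(reference), 1)

def rank_suspects(samples, predict, top_k=10):
    """Rank (image, label) pairs by estimated CER, descending, and
    return the top_k candidates for human-in-the-loop verification."""
    scored = [(cer(label, predict(image)), image, label)
              for image, label in samples]
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:top_k]
```

A sample whose label disagrees strongly with a reasonably trained model's output gets a high estimated CER and rises to the top of the queue; a human annotator then confirms or rejects each flagged sample rather than reviewing the whole dataset.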
Problem

Research questions and friction points this paper is trying to address.

Arabic-script HTR
data quality
label errors
dataset cleaning
Character Error Rate
Innovation

Methods, ideas, or system contributions that make the work stand out.

CER-HV
human-in-the-loop
label error detection
Arabic-script HTR
CRNN
S. Al-Azzawi
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, 97187, Sweden
Elisa Barney
Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, 97187, Sweden
Marcus Liwicki
Luleå University of Technology, EISLAB, Machine Learning, Sweden
Deep Learning · Artificial Intelligence · Document Analysis · Pattern Recognition · Applied AI