Towards Calibration Enhanced Network by Inverse Adversarial Attack

📅 2025-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the weak noise robustness of OCR models in HMI automated testing, this paper proposes a calibration-enhanced network based on inverse adversarial attack. Methodologically, we design a novel inverse adversarial objective tailored for HMI tasks, which exploits directional perturbations to probe OCR decision boundaries, and conduct adversarial training with diverse realistic degradations, including blur, occlusion, and illumination distortion. We further construct the first industrially relevant multi-perturbation HMI screen dataset. Key contributions include: (i) introducing the first inverse adversarial calibration paradigm; (ii) establishing the first industrial-grade HMI perturbation dataset; and (iii) achieving cross-perturbation generalization robustness. Experiments show that our method improves OCR accuracy by 12.6% under strong noise on our custom dataset and attains ≥89.3% recognition accuracy on unseen perturbation types, significantly outperforming existing baselines.
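The summary describes perturbing inputs along directions that probe the model's decision boundary and then training on those perturbed samples. The paper's exact inverse adversarial objective is not given here, so the following is only a minimal sketch of the general pattern (an FGSM-style perturbation followed by a training step) on a toy logistic model; `fgsm_perturb` and `adversarial_train_step` are hypothetical names, not the authors' API.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sign(g):
    # Elementwise sign: +1, -1, or 0.
    return (g > 0) - (g < 0)

def fgsm_perturb(x, y, w, b, eps):
    # FGSM-style step on a toy logistic model (illustration only, not the
    # paper's inverse objective): move the input in the direction that
    # increases the loss, i.e. toward the decision boundary.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return [xi + eps * sign((p - y) * wi) for wi, xi in zip(w, x)]

def adversarial_train_step(x, y, w, b, eps=0.1, lr=0.5):
    # Adversarial training: fit the model on the perturbed sample rather
    # than the clean one, pushing the boundary away from real inputs.
    x_adv = fgsm_perturb(x, y, w, b, eps)
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
    w_new = [wi - lr * (p - y) * xi for wi, xi in zip(w, x_adv)]
    b_new = b - lr * (p - y)
    return w_new, b_new
```

In an OCR setting the scalar model would be replaced by the recognition network and its sequence loss, but the loop structure (perturb, then update on the perturbed batch) is the same.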

📝 Abstract
Test automation has become increasingly important as the complexity of both design and content in Human Machine Interface (HMI) software continues to grow. Current standard practice uses Optical Character Recognition (OCR) techniques to automatically extract textual information from HMI screens for validation. At present, one of the key challenges in automating HMI screen validation is noise handling for OCR models. In this paper, we propose to utilize adversarial training techniques to enhance OCR models in HMI testing scenarios. More specifically, we design a new adversarial attack objective for OCR models to discover the decision boundaries in the context of HMI testing. We then adopt adversarial training to optimize the decision boundaries towards a more robust and accurate OCR model. In addition, we build an HMI screen dataset based on real-world requirements and apply multiple types of perturbation to the clean HMI dataset to provide more complete coverage of potential scenarios. We conduct experiments to demonstrate that adversarial training yields OCR models that are more robust against various kinds of noise while maintaining high accuracy. Further experiments show that the adversarially trained models also exhibit a certain degree of robustness against perturbations from unseen patterns.
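The abstract mentions applying multiple perturbation types (the summary names blur, occlusion, and illumination distortion) to a clean HMI dataset. A minimal sketch of such degradations on a grayscale image, represented as a nested list of floats in [0, 1], might look as follows; the function names and parameters are illustrative assumptions, not the paper's pipeline.

```python
def box_blur(img, k=1):
    # Mean filter over a (2k+1)x(2k+1) window: a simple blur degradation.
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[ii][jj]
                    for ii in range(max(0, i - k), min(h, i + k + 1))
                    for jj in range(max(0, j - k), min(w, j + k + 1))]
            out[i][j] = sum(vals) / len(vals)
    return out

def occlude(img, top, left, size, value=0.0):
    # Overwrite a square patch with a constant value: occlusion degradation.
    out = [row[:] for row in img]
    for i in range(top, min(len(img), top + size)):
        for j in range(left, min(len(img[0]), left + size)):
            out[i][j] = value
    return out

def relight(img, gain=1.3, bias=0.05):
    # Global gain/bias change clipped to [0, 1]: illumination distortion.
    return [[min(1.0, max(0.0, gain * v + bias)) for v in row] for row in img]
```

Applying such transforms to clean screenshots, individually or in combination, is one straightforward way to extend a clean dataset toward the multi-perturbation coverage the paper describes.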
Problem

Research questions and friction points this paper is trying to address.

Enhancing OCR models for HMI testing noise handling
Designing adversarial attack objectives for OCR decision boundaries
Building robust OCR models against diverse perturbations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial training enhances OCR robustness
Inverse adversarial attack optimizes decision boundaries
Perturbed HMI dataset improves scenario coverage
Yupeng Cheng
Nanyang Technological University, Singapore
Zi Pong Lim
Continental Corporation, Singapore
Sarthak Ketanbhai Modi
Nanyang Technological University, Singapore
Yon Shin Teo
Continental Corporation, Singapore
Yushi Cao
Nanyang Technological University, Singapore
Deep Reinforcement Learning · Trustworthy AI
Shang-Wei Lin
Nanyang Technological University, Singapore