Color histogram equalization and fine-tuning to improve expression recognition of (partially occluded) faces on sign language datasets

📅 2025-07-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the fine-grained facial expression classification capability of computer vision models on sign language datasets, specifically addressing intergroup differences in emotional expression between hearing and deaf individuals. To address the distinct color distribution in signers’ facial images, we propose a histogram-equalization-based color normalization method. Additionally, we integrate deep network fine-tuning with upper- and lower-face region–specific feature modeling to enhance robustness under occlusion. Experimental results show an average model sensitivity of 83.8% (class-wise variance: 0.042); upper-face recognition accuracy reaches 77.9%, and lower-face accuracy 79.6%, both approaching or exceeding human performance. The key contributions are twofold: (1) the first systematic validation of hemi-facial partitioning for deaf expression recognition, and (2) significant improvement in cross-subject generalization via color normalization.

📝 Abstract
The goal of this investigation is to quantify to what extent computer vision methods can correctly classify facial expressions on a sign language dataset. We extend our experiments by recognizing expressions using only the upper or lower part of the face, which is needed to further investigate the difference in emotion manifestation between hearing and deaf subjects. To account for the peculiar color profile of the dataset, our method introduces a color normalization stage based on histogram equalization and fine-tuning. The results show the ability to correctly recognize facial expressions with 83.8% mean sensitivity and very little variance (0.042) among classes. As with humans, recognition of expressions from the lower half of the face (79.6%) is higher than from the upper half (77.9%). Notably, the classification accuracy from the upper half of the face is higher than human level.
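The histogram-equalization stage mentioned above can be sketched in plain NumPy. This is a minimal illustrative version, not the paper's implementation: the function names and the choice to equalize each RGB channel independently are assumptions, since the paper does not publish its exact normalization code.

```python
import numpy as np

def equalize_channel(channel: np.ndarray) -> np.ndarray:
    """Histogram-equalize a single uint8 channel via its cumulative histogram."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # CDF value of the darkest intensity present
    span = cdf[-1] - cdf_min
    if span == 0:                      # flat image: nothing to equalize
        return channel.copy()
    # Map each intensity so the output histogram is roughly uniform over 0..255.
    lut = np.round((cdf - cdf_min) / span * 255).astype(np.uint8)
    return lut[channel]

def normalize_colors(rgb: np.ndarray) -> np.ndarray:
    """Equalize each RGB channel of a (H, W, 3) uint8 image independently --
    one plausible reading of the paper's color-normalization stage."""
    return np.stack([equalize_channel(rgb[..., c]) for c in range(3)], axis=-1)
```

Applied before fine-tuning, this stretches the narrow color distribution of signers' face crops to the full intensity range, which is what lets a model pretrained on ordinary face photos transfer more easily.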
Problem

Research questions and friction points this paper is trying to address.

Classify facial expressions in sign language datasets accurately
Recognize expressions using only upper or lower face parts
Improve recognition with color normalization and fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Color histogram equalization for normalization
Fine-tuning to adapt color profiles
Partial face recognition for expression classification
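The partial-face idea in the bullets above amounts to evaluating the classifier on each half of an aligned face crop separately. A minimal sketch follows; the 50% cut line and the function name are assumptions, as the paper does not specify its exact partitioning.

```python
import numpy as np

def split_face(face: np.ndarray, boundary: float = 0.5):
    """Split an aligned face crop of shape (H, W, C) into upper and lower halves.

    `boundary` is the fraction of image height at which to cut; 0.5 is an
    assumed default, roughly the nose line on a well-aligned crop.
    """
    cut = int(round(face.shape[0] * boundary))
    return face[:cut], face[cut:]
```

Each half would then be fed to the fine-tuned expression classifier on its own, yielding the separate upper-face and lower-face accuracies reported in the abstract.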