🤖 AI Summary
To address feature degradation and accuracy loss in facial expression recognition (FER) caused by lossy image compression, this paper proposes an end-to-end learnable image compression framework. The method introduces a task-specific joint optimization objective that integrates feature-aware reconstruction loss with classification-guided supervision, enabling adaptive weighting to balance compression fidelity and discriminative feature preservation. Leveraging a deep learning–based compression backbone, the framework supports both standalone fine-tuning and end-to-end joint training. Experiments show that standalone fine-tuning improves FER accuracy by 0.71% while reducing bit-rate by 49.32%; joint optimization further boosts accuracy by 4.04% and reduces bit-rate by 89.12%, maintaining model stability in both compressed and pixel domains. The core contribution is the first integration of FER-driven discriminative constraints into a learnable compression pipeline, achieving high-accuracy recognition under high compression ratios.
📝 Abstract
Efficient data compression is crucial for the storage and transmission of visual data. However, in facial expression recognition (FER) tasks, lossy compression often leads to feature degradation and reduced accuracy. To address these challenges, this study proposes an end-to-end model designed to preserve critical features and enhance both compression and recognition performance. A custom loss function, tailored to balance compression fidelity against recognition performance, is introduced to optimize the model, and the influence of varying the loss term weights on this balance is examined. Experimental results indicate that fine-tuning the compression model alone improves classification accuracy by 0.71% and compression efficiency by 49.32%, while joint optimization achieves significant gains of 4.04% in accuracy and 89.12% in efficiency. Moreover, the findings demonstrate that the jointly optimized classification model maintains high accuracy on both compressed and uncompressed data, while the compression model reliably preserves image details, even at high compression rates.
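The abstract does not give the exact form of the custom loss, but objectives of this kind are typically a weighted sum of a rate term, a reconstruction-distortion term, and a classification term. A minimal sketch of such a joint objective is below; the function name `joint_loss` and the weights `lam_d` and `lam_c` are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def joint_loss(rate_bits, x, x_hat, logits, label, lam_d=1.0, lam_c=0.1):
    """Hypothetical joint rate-distortion-classification objective:
    L = R + lam_d * D + lam_c * CE. The weights lam_d and lam_c control
    the compression/recognition trade-off discussed in the abstract."""
    # D: reconstruction distortion (mean squared error)
    distortion = np.mean((x - x_hat) ** 2)
    # CE: softmax cross-entropy of the FER classifier's logits
    z = logits - logits.max()                    # stabilize the softmax
    log_probs = z - np.log(np.exp(z).sum())
    ce = -log_probs[label]
    # R: bit-rate estimate of the compressed representation
    return rate_bits + lam_d * distortion + lam_c * ce
```

In practice each term would be differentiable and backpropagated through both the compression backbone and the classifier; raising `lam_c` relative to `lam_d` shifts the optimum toward preserving discriminative features at the cost of pixel fidelity.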