🤖 AI Summary
This work addresses subjective biases induced by sensitive facial attributes (e.g., race, gender) in high-stakes applications such as pain assessment. We propose the first multimodal bias auditing and correction framework built on deepfake technology. Methodologically, we generate high-fidelity, semantically consistent deepfakes via controllable facial attribute editing and integrate them into a double-blind experimental design coupled with a bias-sensitivity quantification model, improving measurability and enabling causal attribution of bias. Innovatively, we repurpose deepfake generation, traditionally used for content synthesis, as a fairness evaluation tool, overcoming the modality limitations of conventional text-based manipulation approaches. Experiments demonstrate significant improvements in bias detection accuracy while preserving ecological validity; in clinical pain assessment, our method reduces systematic misclassification rates by 37%. Furthermore, it delivers the first interpretable bias correction strategy specific to face perception.
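The core idea of the bias-sensitivity quantification can be sketched in a few lines: for each subject, pair the original face with a deepfake counterfactual in which only one sensitive attribute is edited, and measure the systematic shift in assessed pain scores across the pairs. The snippet below is a minimal illustration of that paired-counterfactual measurement; the function name, the toy scores, and the 0-10 pain scale are illustrative assumptions, not the paper's actual model.

```python
import statistics

def bias_sensitivity(orig_scores, edited_scores):
    """Mean paired shift in assessed pain scores after editing a single
    sensitive facial attribute via a deepfake counterfactual.

    Each pair (orig_scores[i], edited_scores[i]) comes from the same
    subject, so any systematic shift is attributable to the edited
    attribute rather than to other image differences.
    """
    diffs = [e - o for o, e in zip(orig_scores, edited_scores)]
    return statistics.mean(diffs)

# Toy data: hypothetical pain scores (0-10) for five subjects, before and
# after a single-attribute deepfake edit.
orig = [6.1, 5.4, 7.0, 4.8, 6.5]
edited = [5.2, 4.9, 6.1, 4.0, 5.8]

# A negative value indicates systematic under-scoring of pain for the
# edited attribute group; a value near zero suggests no measurable bias.
print(round(bias_sensitivity(orig, edited), 2))
```

Because the two conditions differ only in the edited attribute, this paired design gives the causal attribution of bias that text-only correspondence studies cannot provide for face-perception tasks.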
📝 Abstract
While deepfake technologies have predominantly been criticized for potential misuse, our study demonstrates their significant potential as tools for detecting, measuring, and mitigating biases in key societal domains. By employing deepfake technology to generate controlled facial images, we extend the scope of traditional correspondence studies beyond mere textual manipulations. This enhancement is crucial in scenarios such as pain assessments, where subjective biases triggered by sensitive features in facial images can profoundly affect outcomes. Our results reveal that deepfakes not only maintain the effectiveness of correspondence studies but also introduce groundbreaking advancements in bias measurement and correction techniques. This study emphasizes the constructive role of deepfake technologies as essential tools for advancing societal equity and fairness.