Open-World Deepfake Attribution via Confidence-Aware Asymmetric Learning

📅 2025-12-14
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Open-world deepfake attribution (OW-DFA) faces two critical challenges: (1) unreliable pseudo-labels caused by confidence bias, which destabilize training; and (2) an unknown number of unseen forgery types, which violates the prior-knowledge assumption made by most existing methods. To address these, we propose Confidence-Aware Asymmetric Learning (CAL), the first framework to integrate dynamic consistency regularization with asymmetric confidence boosting to mitigate pseudo-label noise. CAL further introduces Dynamic Prototype Pruning (DPP), a prior-free strategy that adaptively estimates the number of unknown classes through iterative prototype refinement. The method unifies confidence-adaptive loss scaling, selective high-confidence learning, prototype-based clustering, and semi-supervised contrastive consistency modeling. Extensive experiments on standard and extended OW-DFA benchmarks show that CAL consistently surpasses state-of-the-art methods, substantially improving attribution accuracy for both known and unknown forgery types while remaining robust and deployable.
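The confidence-adaptive loss scaling mentioned above can be pictured as a simple per-batch weighting schedule. The linear interpolation below is an illustrative assumption: the summary only states that training focus gradually shifts from high- to low-confidence samples.

```python
import numpy as np

def confidence_loss_weights(confidences, progress):
    """Weight per-sample losses by normalized pseudo-label confidence.

    Early in training (progress near 0) high-confidence samples dominate;
    as progress -> 1 the emphasis shifts toward low-confidence samples.
    The exact weighting function is an assumption for this sketch.
    """
    c = np.asarray(confidences, dtype=float)
    # Normalize confidences to [0, 1] within the batch.
    c_norm = (c - c.min()) / (c.max() - c.min() + 1e-8)
    # Interpolate between "trust high confidence" and
    # "focus on low confidence" as training progresses.
    return (1.0 - progress) * c_norm + progress * (1.0 - c_norm)

# Toy batch: softmax-max confidences for four unlabeled samples.
conf = [0.95, 0.80, 0.55, 0.30]
early = confidence_loss_weights(conf, progress=0.0)
late = confidence_loss_weights(conf, progress=1.0)
```

At progress 0 the highest-confidence sample carries the largest weight; at progress 1 the ordering is reversed, so previously down-weighted uncertain samples drive the loss.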

๐Ÿ“ Abstract
The proliferation of synthetic facial imagery has intensified the need for robust Open-World DeepFake Attribution (OW-DFA), which aims to attribute both known and unknown forgeries using labeled data for known types and unlabeled data containing a mixture of known and novel types. However, existing OW-DFA methods face two critical limitations: 1) a confidence skew that leads to unreliable pseudo-labels for novel forgeries, resulting in biased training; and 2) an unrealistic assumption that the number of unknown forgery types is known *a priori*. To address these challenges, we propose a Confidence-Aware Asymmetric Learning (CAL) framework, which adaptively balances model confidence across known and novel forgery types. CAL consists of two main components: Confidence-Aware Consistency Regularization (CCR) and Asymmetric Confidence Reinforcement (ACR). CCR mitigates pseudo-label bias by dynamically scaling sample losses based on normalized confidence, gradually shifting the training focus from high- to low-confidence samples. ACR complements this by separately calibrating confidence for known and novel classes through selective learning on high-confidence samples, guided by their confidence gap. Together, CCR and ACR form a mutually reinforcing loop that significantly improves the model's OW-DFA performance. Moreover, we introduce a Dynamic Prototype Pruning (DPP) strategy that automatically estimates the number of novel forgery types in a coarse-to-fine manner, removing the need for unrealistic prior assumptions and enhancing the scalability of our method to real-world OW-DFA scenarios. Extensive experiments on the standard OW-DFA benchmark and a newly extended benchmark incorporating advanced manipulations demonstrate that CAL consistently outperforms previous methods, achieving new state-of-the-art performance on both known and novel forgery attribution.
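The coarse-to-fine estimation that DPP performs can be illustrated with a minimal prototype-pruning loop: start from a deliberate overestimate of the class count and repeatedly discard prototypes whose clusters are too small. The k-means inner step, the strided initialization, and the `min_cluster_frac` threshold are assumptions for this sketch, not the paper's exact procedure.

```python
import numpy as np

def estimate_num_novel_classes(features, k_max=10, min_cluster_frac=0.2,
                               n_refine=10):
    """Prior-free estimate of the novel-class count via prototype pruning.

    Starts from k_max prototypes (an overestimate), refines them with
    k-means-style updates, and prunes the smallest cluster until every
    surviving prototype holds a reasonable share of the data: a
    coarse-to-fine scheme in the spirit of DPP.
    """
    x = np.asarray(features, dtype=float)
    # Coarse initialization: evenly strided samples as prototypes.
    protos = x[:: max(1, len(x) // k_max)][:k_max]
    while len(protos) > 1:
        # Refine prototypes with a few assignment/update steps.
        for _ in range(n_refine):
            d = np.linalg.norm(x[:, None, :] - protos[None, :, :], axis=2)
            assign = d.argmin(axis=1)
            protos = np.array([x[assign == k].mean(axis=0)
                               if (assign == k).any() else protos[k]
                               for k in range(len(protos))])
        counts = np.bincount(assign, minlength=len(protos))
        smallest = counts.argmin()
        if counts[smallest] >= min_cluster_frac * len(x):
            break  # every prototype is well supported; stop pruning
        protos = np.delete(protos, smallest, axis=0)
    return len(protos)

# Toy data: three well-separated blobs; the estimate should recover 3.
rng = np.random.default_rng(1)
blobs = np.concatenate([rng.normal(c, 0.1, size=(50, 2))
                        for c in [(0.0, 0.0), (5.0, 5.0), (-5.0, 5.0)]])
k_est = estimate_num_novel_classes(blobs, k_max=8)
```

On this toy input the loop prunes the initial eight prototypes down to three, one per blob, without any prior on the true count.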
Problem

Research questions and friction points this paper is trying to address.

Addresses confidence skew in pseudo-labeling for novel deepfakes
Removes unrealistic prior assumption on unknown forgery type count
Improves open-world deepfake attribution for known and novel manipulations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Confidence-Aware Asymmetric Learning balances model confidence adaptively
Dynamic Prototype Pruning estimates novel forgery types automatically
Confidence-Aware Consistency Regularization mitigates pseudo-label bias dynamically
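One way to picture ACR's selective high-confidence learning is a rule that boosts whichever group, known or novel, currently lags in average confidence. The `top_frac` selection fraction below is a hypothetical simplification of the confidence-gap guidance described in the abstract.

```python
import numpy as np

def select_boost_samples(confidences, is_novel, top_frac=0.25):
    """Pick high-confidence samples from the lower-confidence group.

    Measures the confidence gap between known and novel pseudo-labeled
    samples and returns indices of the top-confidence members of the
    lagging group, to receive extra supervision. The selection rule
    and fraction are assumptions for this sketch.
    """
    c = np.asarray(confidences, dtype=float)
    novel = np.asarray(is_novel, dtype=bool)
    # Positive gap means known classes are more confident than novel ones.
    gap = c[~novel].mean() - c[novel].mean()
    lagging = novel if gap > 0 else ~novel
    idx = np.flatnonzero(lagging)
    k = max(1, int(round(top_frac * len(idx))))
    # Highest-confidence members of the lagging group.
    return idx[np.argsort(c[idx])[::-1][:k]]

conf = [0.9, 0.85, 0.8, 0.6, 0.5, 0.4]   # known samples first
is_novel = [False, False, False, True, True, True]
boost_idx = select_boost_samples(conf, is_novel)
```

Here the novel group lags (mean 0.5 vs. 0.85), so the single most confident novel sample (index 3) is selected for boosting.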