AI Summary
This work addresses the optimization collapse in vision-language model-based deepfake detection caused by Sharpness-Aware Minimization when handling non-semantic synthetic artifacts, a failure that severely degrades cross-domain generalization. We formally establish, for the first time, the relationship between the Critical Optimization Radius (COR) and the Gradient Signal-to-Noise Ratio (GSNR), revealing the root cause of this collapse. Building on this analysis, we develop the Contrastive Regional Injection Transformer (CoRIT), which couples a computationally efficient Contrastive Gradient Proxy with three training-free strategies: a Region Refinement Mask, Regional Signal Injection, and Hierarchical Representation Integration. Extensive experiments demonstrate that CoRIT effectively mitigates optimization collapse and achieves state-of-the-art generalization on both cross-domain and universal deepfake detection benchmarks.
Abstract
While Vision-Language Models (VLMs) like CLIP have emerged as a dominant paradigm for generalizable deepfake detection, a representational disconnect remains: their semantic-centric pre-training is ill-suited for capturing the non-semantic artifacts inherent to hyper-realistic synthesis. In this work, we identify a failure mode termed Optimization Collapse, where detectors trained with Sharpness-Aware Minimization (SAM) degenerate to random guessing on non-semantic forgeries once the perturbation radius exceeds a narrow threshold. To theoretically formalize this collapse, we propose the Critical Optimization Radius (COR) to quantify the geometric stability of the optimization landscape, and leverage the Gradient Signal-to-Noise Ratio (GSNR) to measure generalization potential. We establish a theorem proving that COR increases monotonically with GSNR, thereby revealing that the geometric instability of SAM optimization originates from degraded intrinsic generalization potential. This result identifies the layer-wise attenuation of GSNR as the root cause of Optimization Collapse in detecting non-semantic forgeries. Although naively reducing the perturbation radius yields stable convergence under SAM, it merely treats the symptom without mitigating the intrinsic generalization degradation, necessitating enhanced gradient fidelity. Building on this insight, we propose the Contrastive Regional Injection Transformer (CoRIT), which integrates a computationally efficient Contrastive Gradient Proxy (CGP) with three training-free strategies: a Region Refinement Mask to suppress CGP variance, Regional Signal Injection to preserve CGP magnitude, and Hierarchical Representation Integration to attain more generalizable representations. Extensive experiments demonstrate that CoRIT mitigates optimization collapse and achieves state-of-the-art generalization across cross-domain and universal forgery benchmarks.
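The abstract does not reproduce the formal definitions, so the following is only a rough illustration under common conventions: in the GSNR literature, a parameter's GSNR is the squared mean of its per-sample gradients divided by their variance (high when gradients agree across samples, low when noise dominates), and SAM's inner ascent step perturbs the weights by the radius ρ along the normalized gradient. All names and the toy gradient values below are illustrative, not taken from the paper.

```python
import numpy as np

def gsnr(per_sample_grads):
    """Per-parameter Gradient Signal-to-Noise Ratio:
    squared mean of per-sample gradients over their variance."""
    g = np.asarray(per_sample_grads)      # shape: (n_samples, n_params)
    mean = g.mean(axis=0)
    var = g.var(axis=0) + 1e-12           # guard against zero variance
    return mean ** 2 / var

def sam_perturbation(grad, rho):
    """SAM's inner ascent step: displace weights by rho along
    the normalized gradient before the outer descent update."""
    return rho * grad / (np.linalg.norm(grad) + 1e-12)

# Per-sample gradients that largely agree (strong signal) ...
aligned = np.array([[1.0, 1.1], [0.9, 1.0], [1.1, 0.9]])
# ... versus gradients that conflict across samples (noise-dominated)
noisy = np.array([[1.0, -1.0], [-1.2, 1.1], [0.3, -0.2]])

print(gsnr(aligned))  # high GSNR: larger stable perturbation radius
print(gsnr(noisy))    # low GSNR: SAM becomes geometrically unstable
```

Under the paper's theorem, the low-GSNR case corresponds to a small Critical Optimization Radius, which is why a fixed SAM radius that is safe for semantic features can trigger collapse on noise-dominated, non-semantic artifact gradients.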