🤖 AI Summary
Deepfake generators evolve rapidly, making retraining detection models prohibitively expensive. To address this, we propose Real-aware Residual Model Merging (R²M), a training-free parameter-space model fusion framework. R²M is the first to integrate low-rank decomposition with residual merging: it decomposes task vectors into low-rank components, applies layer-wise rank truncation for denoising, and enforces task-specific norm matching—thereby explicitly disentangling and fusing shared authentic features and forgery-specific residuals across expert models. Crucially, R²M supports dynamic expansion, enabling seamless integration of detectors for newly emerging forgery types without retraining. Experiments demonstrate that R²M significantly outperforms joint training and state-of-the-art model merging methods across in-distribution, cross-dataset, and zero-shot unseen forgery scenarios. It achieves superior generalization and scalability while requiring no additional training.
📝 Abstract
Deepfake generators evolve quickly, making exhaustive data collection and repeated retraining impractical. We argue that model merging is a natural fit for deepfake detection: unlike generic multi-task settings with disjoint labels, deepfake specialists share the same binary decision and differ only in generator-specific artifacts. Empirically, we show that simple weight averaging preserves Real representations while attenuating Fake-specific cues. Building on these findings, we propose Real-aware Residual Model Merging (R²M), a training-free parameter-space merging framework. R²M estimates a shared Real component via a low-rank factorization of task vectors, decomposes each specialist into a Real-aligned part and a Fake residual, denoises the residuals with layerwise rank truncation, and aggregates them with per-task norm matching to prevent any single generator from dominating. A concise rationale explains why a simple head suffices: the Real component induces a common separation direction in feature space, while the truncated residuals contribute only minor off-axis variations. Across in-distribution, cross-dataset, and unseen-dataset settings, R²M outperforms joint training and other merging baselines. Importantly, R²M is also composable: when a new forgery family appears, we fine-tune one specialist and re-merge, eliminating the need for retraining.
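The abstract's exact formulation is not given here, but the pipeline it describes (task vectors, a shared low-rank Real component, rank-truncated Fake residuals, per-task norm matching) can be sketched for a single weight matrix. This is an illustrative sketch only: the function names, the use of the mean task vector to estimate the shared component, and the rank hyperparameters `shared_rank` and `resid_rank` are assumptions, not the paper's specification.

```python
import numpy as np

def truncated_svd(M, rank):
    """Keep only the top-`rank` singular components of M (denoising step)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r = min(rank, len(s))
    return (U[:, :r] * s[:r]) @ Vt[:r]

def r2m_merge_layer(base, specialists, shared_rank=1, resid_rank=2):
    """Illustrative sketch of R²M-style merging for one weight matrix.

    base:        pretrained weights (theta_0), shape (d_out, d_in)
    specialists: list of fine-tuned weight matrices (theta_i), same shape
    """
    # Task vectors: what each specialist learned on top of the base model.
    taus = [w - base for w in specialists]
    # Shared (Real-aligned) component: here estimated, as an assumption,
    # by a low-rank factorization of the mean task vector.
    shared = truncated_svd(np.mean(taus, axis=0), shared_rank)
    # Fake residuals: specialist-specific parts, denoised by rank truncation.
    resids = [truncated_svd(t - shared, resid_rank) for t in taus]
    # Per-task norm matching: rescale each residual to the mean residual norm
    # so no single generator's specialist dominates the merge.
    norms = [np.linalg.norm(r) + 1e-12 for r in resids]
    target = np.mean(norms)
    resids = [r * (target / n) for r, n in zip(resids, norms)]
    # Merged weights: base + shared Real component + aggregated Fake residuals.
    return base + shared + np.mean(resids, axis=0)
```

Because every step is a closed-form linear-algebra operation on the weights, the merge is training-free, and composability follows directly: adding a specialist for a new forgery family only appends one more task vector before re-running the merge.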