DevFD: Developmental Face Forgery Detection by Learning Shared and Orthogonal LoRA Subspaces

📅 2025-09-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Face forgery detection models suffer catastrophic forgetting of previously encountered forgery types and struggle to adapt to newly emerging ones as face manipulation techniques evolve rapidly. To address this, the work pioneers modeling face forgery detection as a developmental continual learning problem. It proposes a Mixture-of-Experts architecture—a Real-LoRA plus multiple Fake-LoRAs—built on orthogonal LoRA subspaces: a knowledge-separation mechanism decouples authentic and forged facial representations, and an orthogonal gradient constraint loss mitigates parameter interference between tasks. The approach achieves parameter-efficient fine-tuning while balancing incremental learning of novel forgery types against long-term stability across tasks. Extensive experiments under multi-dataset and forgery-type incremental protocols demonstrate significant improvements over state-of-the-art methods, effectively alleviating catastrophic forgetting and enhancing cross-domain generalization.

📝 Abstract
The rise of realistic digital face generation and manipulation poses significant social risks. The primary challenge lies in the rapid and diverse evolution of generation techniques, which often outstrips the detection capabilities of existing models. To defend against ever-evolving new types of forgery, the model must quickly adapt to new domains with limited computation and data while avoiding forgetting previously learned forgery types. In this work, we posit that genuine facial samples are abundant and relatively stable in acquisition methods, while forged faces continuously evolve with each iteration of manipulation techniques. Given the practical infeasibility of exhaustively collecting all forgery variants, we frame face forgery detection as a continual learning problem and allow the model to develop as new forgery types emerge. Specifically, we employ a Developmental Mixture of Experts (MoE) architecture that uses LoRA models as its individual experts. These experts are organized into two groups: a Real-LoRA to learn and refine knowledge of real faces, and multiple Fake-LoRAs to capture incremental information from different forgery types. To prevent catastrophic forgetting, we ensure that the learning direction of each Fake-LoRA is orthogonal to the established subspace. Moreover, we integrate orthogonal gradients into the orthogonal loss of the Fake-LoRAs, preventing gradient interference throughout the training of each task. Experimental results under both dataset-incremental and manipulation-type-incremental protocols demonstrate the effectiveness of our method.
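The orthogonality constraint described above can be sketched as a simple penalty on the overlap between a new Fake-LoRA's update directions and the subspace spanned by previously learned adapters. This is a minimal illustration of the idea, not the paper's exact loss; the function name and the choice to penalize the down-projection matrix `A` are assumptions.

```python
import torch

def orthogonal_loss(new_A: torch.Tensor, old_As: list) -> torch.Tensor:
    """Penalize overlap between a new Fake-LoRA's down-projection rows
    (new_A: rank x d) and the rows of previously learned adapters.

    A zero loss means every new direction is orthogonal to the
    established subspace, so updating the new expert cannot
    interfere with knowledge stored in the old ones.
    """
    if not old_As:
        # First task: no established subspace to protect.
        return new_A.new_zeros(())
    old = torch.cat(old_As, dim=0)      # (sum of old ranks, d)
    overlap = new_A @ old.T             # pairwise inner products
    return (overlap ** 2).sum()         # zero iff fully orthogonal
```

In training, this penalty would be added to the detection loss so gradients steer the new expert into unused directions of parameter space.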
Problem

Research questions and friction points this paper is trying to address.

Detecting rapidly evolving face forgery techniques that outpace existing models
Enabling models to adapt to new forgery types without forgetting previous knowledge
Addressing practical infeasibility of collecting all forgery variants through continual learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Developmental Mixture of Experts architecture
Organizes LoRA experts into Real-LoRA and Fake-LoRAs
Employs orthogonal loss to prevent catastrophic forgetting
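The expert organization above can be sketched as a frozen base layer augmented with one Real-LoRA and a growing list of Fake-LoRAs, each a low-rank pair (A, B). This is a hypothetical sketch under standard LoRA conventions; the class and method names are assumptions, and the paper's gating and training details are omitted.

```python
import torch
import torch.nn as nn

class DevelopmentalLoRALinear(nn.Module):
    """Frozen base linear layer plus a Real-LoRA and per-task Fake-LoRAs.

    Each adapter adds a low-rank update B @ A to the frozen weight;
    B starts at zero so a freshly added expert leaves the output unchanged.
    """
    def __init__(self, d_in: int, d_out: int, rank: int = 4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)   # backbone stays frozen
        self.base.bias.requires_grad_(False)
        self.real_A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.real_B = nn.Parameter(torch.zeros(d_out, rank))
        self.fake_As = nn.ParameterList()
        self.fake_Bs = nn.ParameterList()

    def add_fake_expert(self, rank: int = 4):
        """Grow a new Fake-LoRA when a new forgery type arrives."""
        d_out, d_in = self.base.weight.shape
        self.fake_As.append(nn.Parameter(torch.randn(rank, d_in) * 0.01))
        self.fake_Bs.append(nn.Parameter(torch.zeros(d_out, rank)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base(x) + x @ self.real_A.T @ self.real_B.T
        for A, B in zip(self.fake_As, self.fake_Bs):
            out = out + x @ A.T @ B.T
        return out
```

Because only the small A/B matrices are trainable, adding an expert per forgery type stays parameter-efficient while the shared backbone is untouched.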