Stacking Brick by Brick: Aligned Feature Isolation for Incremental Face Forgery Detection

📅 2024-11-18
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses catastrophic forgetting in incremental face forgery detection (IFFD). To balance task-specificity and generalization, we propose a novel framework featuring: (1) a first-of-its-kind feature distribution alignment and isolation mechanism that disentangles latent subspaces per forgery type in a task-wise manner; and (2) a sparse uniform replay (SUR) strategy coupled with a latent-space incremental detector (LID) to enable efficient, continual knowledge accumulation. Evaluated on our newly constructed benchmark tailored for IFFD, the method achieves significant improvements over existing approaches: +6.8% average detection accuracy and −42% forgetting rate. These results demonstrate superior robustness, scalability, and effective mitigation of catastrophic forgetting under incremental learning settings.

📝 Abstract
The rapid advancement of face forgery techniques has introduced a growing variety of forgeries. Incremental Face Forgery Detection (IFFD), which gradually adds new forgery data to fine-tune the previously trained model, has been introduced as a promising strategy to deal with evolving forgery methods. However, a naively trained IFFD model is prone to catastrophic forgetting when new forgeries are integrated: treating all forgeries as a single "Fake" class in Real/Fake classification can cause different forgery types to override one another, resulting in the forgetting of unique characteristics from earlier tasks and limiting the model's effectiveness in learning forgery specificity and generality. In this paper, we propose to stack the latent feature distributions of previous and new tasks brick by brick, i.e., achieving aligned feature isolation. In this manner, we aim to preserve learned forgery information and accumulate new knowledge by minimizing distribution overriding, thereby mitigating catastrophic forgetting. To achieve this, we first introduce Sparse Uniform Replay (SUR) to obtain representative subsets that can be treated as uniformly sparse versions of the previous global distributions. We then propose a Latent-space Incremental Detector (LID) that leverages SUR data to isolate and align distributions. For evaluation, we construct a more advanced and comprehensive benchmark tailored for IFFD. The leading experimental results validate the superiority of our method.
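The paper's exact SUR procedure is not given on this page, but the stated goal, a small replay subset that is a "uniformly sparse version" of a previous task's latent distribution, can be sketched with farthest-point sampling over stored feature embeddings. Everything below (function name, budget, use of farthest-point sampling) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def sparse_uniform_replay(features: np.ndarray, budget: int, seed: int = 0) -> np.ndarray:
    """Pick a small, uniformly spread subset of latent features to replay.

    Hypothetical sketch of the SUR idea: farthest-point sampling selects
    exemplars that cover the previous task's feature distribution sparsely
    but uniformly, rather than clustering around its modes.
    """
    rng = np.random.default_rng(seed)
    n = len(features)
    chosen = [int(rng.integers(n))]  # seed the subset with a random exemplar
    # Track each point's distance to its nearest already-chosen exemplar.
    dist = np.linalg.norm(features - features[chosen[0]], axis=1)
    for _ in range(budget - 1):
        idx = int(np.argmax(dist))   # farthest point from the current subset
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(features - features[idx], axis=1))
    return np.array(chosen)

# Usage: keep 32 replay exemplars out of 10,000 previous-task embeddings.
feats = np.random.default_rng(1).normal(size=(10_000, 128))
keep = sparse_uniform_replay(feats, budget=32)
```

Because already-chosen points have distance zero, the subset never repeats an index, and each new pick maximizes coverage of the remaining distribution, which matches the "uniformly sparse" intuition.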
Problem

Research questions and friction points this paper is trying to address.

Mitigate catastrophic forgetting in incremental face forgery detection
Preserve learned forgery information while adding new data
Align and isolate latent feature distributions for different forgeries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligned feature isolation mitigates catastrophic forgetting
Sparse Uniform Replay preserves representative subsets
Latent-space Incremental Detector aligns distributions
Jikang Cheng
School of Computer Science, Wuhan University
Zhiyuan Yan
School of Electronic and Computer Engineering, Peking University Shenzhen Graduate School
Ying Zhang
WeChat, Tencent Inc.
Li Hao
School of Electronic and Computer Engineering, Peking University Shenzhen Graduate School
Jiaxin Ai
School of Computer Science, Wuhan University
Qin Zou
Professor of Computer Science, Wuhan University
Computer Vision · Pattern Recognition · Machine Learning
Chen Li
WeChat, Tencent Inc.
Zhongyuan Wang
School of Computer Science, Wuhan University