A Survey of Defenses against AI-generated Visual Media: Detection, Disruption, and Authentication

📅 2024-07-15
🏛️ arXiv.org
📈 Citations: 11
Influential: 5
🤖 AI Summary
To address risks—including disinformation, deception, and copyright infringement—arising from the misuse of deep generative models, this paper systematically surveys defense techniques against AI-generated visual content, focusing on detection, perturbation-based interference, and content authentication. We propose the first end-to-end, passive–active collaborative defense framework; introduce a cross-task transferable methodology taxonomy; and integrate trustworthiness evaluation dimensions—robustness, fairness, and verifiability. Through comprehensive literature analysis, formal modeling, and benchmark evaluation, we distill the technical evolution trajectory, identify core challenges (e.g., insufficient robustness and poor generalization), and outline future directions, including human–AI co-verification and cross-domain transferable defenses.

📝 Abstract
Deep generative models have demonstrated impressive performance in various computer vision applications, including image synthesis, video generation, and medical analysis. Despite their significant advancements, these models may be used for malicious purposes, such as misinformation, deception, and copyright violation. In this paper, we provide a systematic and timely review of research efforts on defenses against AI-generated visual media, covering detection, disruption, and authentication. We review existing methods and summarize the mainstream defense-related tasks within a unified passive and proactive framework. Moreover, we survey the derivative tasks concerning the trustworthiness of defenses, such as their robustness and fairness. For each task, we formulate its general pipeline and propose a taxonomy based on methodological strategies that are uniformly applicable to the primary subtasks. Additionally, we summarize the commonly used evaluation datasets, criteria, and metrics. Finally, by analyzing the reviewed studies, we provide insights into current research challenges and suggest possible directions for future research.
Problem

Research questions and friction points this paper is trying to address.

Surveying defense methods against malicious AI-generated visual media
Reviewing detection, disruption and authentication for synthetic content
Analyzing robustness and fairness of AI-generated media defenses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic review of AI-generated media defenses
Unified framework for passive and proactive protection
Taxonomy based on methodological strategies for tasks
Jingyi Deng
Faculty of Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an, 710049 China
Chenhao Lin
Faculty of Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an, 710049 China
Zhengyu Zhao
Xi'an Jiaotong University, China
Adversarial Machine Learning · Computer Vision
Shuai Liu
Faculty of Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an, 710049 China
Qian Wang
School of Cyber Science and Engineering, Wuhan University, Wuhan, 430072 China
Chao Shen
Faculty of Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an, 710049 China