VideoMarkBench: Benchmarking Robustness of Video Watermarking

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
The surge in AI-generated videos has intensified risks of misinformation and copyright infringement, yet existing video watermarking methods lack systematic robustness evaluation under common and adversarial perturbations. Method: This paper introduces VideoMarkBench, the first benchmark dedicated to evaluating watermark robustness in videos. It unifies three state-of-the-art video generative models, four watermarking algorithms, and seven detection aggregation strategies, and it covers twelve perturbation types under white-box, black-box, and no-box threat models. Contribution/Results: Experiments reveal that mainstream watermarks suffer an average detection accuracy drop exceeding 40% under compression, frame-rate adjustment, and adversarial attacks, highlighting widespread fragility. The benchmark establishes a standardized paradigm for robustness assessment in video watermarking, and its code and datasets are publicly released to foster reproducible, community-wide evaluation.

📝 Abstract
The rapid development of video generative models has led to a surge in highly realistic synthetic videos, raising ethical concerns related to disinformation and copyright infringement. Recently, video watermarking has been proposed as a mitigation strategy by embedding invisible marks into AI-generated videos to enable subsequent detection. However, the robustness of existing video watermarking methods against both common and adversarial perturbations remains underexplored. In this work, we introduce VideoMarkBench, the first systematic benchmark designed to evaluate the robustness of video watermarks under watermark removal and watermark forgery attacks. Our study encompasses a unified dataset generated by three state-of-the-art video generative models, across three video styles, incorporating four watermarking methods and seven aggregation strategies used during detection. We comprehensively evaluate 12 types of perturbations under white-box, black-box, and no-box threat models. Our findings reveal significant vulnerabilities in current watermarking approaches and highlight the urgent need for more robust solutions. Our code is available at https://github.com/zhengyuan-jiang/VideoMarkBench.
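The abstract notes that video watermark detection relies on aggregation strategies that combine per-frame detection results into a single video-level decision. The paper's seven specific strategies are not listed here, so the sketch below is a hypothetical illustration of a few common choices (mean score, median score, majority vote, any-frame) over per-frame watermark scores; the function name, strategies, and threshold are assumptions for illustration, not the benchmark's actual API.

```python
import numpy as np

def aggregate_frame_scores(scores, strategy="mean", threshold=0.5):
    """Combine per-frame watermark detection scores into one video-level
    watermarked / not-watermarked decision.

    scores: iterable of per-frame detection scores in [0, 1]
    strategy: hypothetical aggregation rule (not the paper's exact set)
    threshold: per-frame or aggregate decision threshold
    """
    scores = np.asarray(list(scores), dtype=float)
    if strategy == "mean":
        # average score across frames, then threshold once
        return bool(scores.mean() > threshold)
    if strategy == "median":
        # robust to a few heavily perturbed frames
        return bool(np.median(scores) > threshold)
    if strategy == "majority":
        # watermarked if more than half the frames exceed the threshold
        return bool((scores > threshold).mean() > 0.5)
    if strategy == "any":
        # watermarked if at least one frame exceeds the threshold
        return bool((scores > threshold).any())
    raise ValueError(f"unknown strategy: {strategy!r}")
```

A removal attack that degrades only some frames can flip strategies differently: for scores `[0.9, 0.8, 0.2]`, both `mean` and `majority` still report a watermark, while for `[0.1, 0.2, 0.9]` the `median` rule does not but the `any` rule does, which is one reason the choice of aggregation matters for robustness.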
Problem

Research questions and friction points this paper is trying to address.

Assessing video watermark robustness against common and adversarial attacks
Evaluating detection under diverse perturbations and threat models
Identifying vulnerabilities in current AI-generated video watermarking methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmarking video watermark robustness systematically
Evaluating under diverse attacks and perturbations
Unified dataset with multiple generative models