VideoPure: Diffusion-based Adversarial Purification for Video Recognition

📅 2025-01-25
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Video recognition models are vulnerable to spatiotemporal adversarial attacks, posing serious security risks in real-world deployment. To address this, we propose VideoPure, the first diffusion-based adversarial purification framework tailored for videos. It adapts DDIM diffusion to the video domain, enabling efficient reconstruction of spatiotemporal adversarial examples via temporally consistent DDIM inversion. We further introduce a multi-step, intermediate-feature-guided joint denoising strategy and a multi-step ensemble voting mechanism to enhance robustness. Extensive experiments demonstrate that VideoPure achieves state-of-the-art robustness under black-box, gray-box, and adaptive attack settings. It consistently outperforms existing video purification methods across multiple benchmarks (Kinetics-400, Something-Something V2) and architectures (I3D, SlowFast, ViT-Vi3D), striking a strong balance among defense strength, inference efficiency, and cross-architecture generalizability. The source code is publicly available.

πŸ“ Abstract
Recent work indicates that video recognition models are vulnerable to adversarial examples, posing a serious security risk to downstream applications. However, current research has primarily focused on adversarial attacks, with limited work exploring defense mechanisms. Furthermore, due to the spatial-temporal complexity of videos, existing video defense methods face issues of high cost, overfitting, and limited defense performance. Recently, diffusion-based adversarial purification methods have achieved robust defense performance in the image domain. However, due to the additional temporal dimension in videos, directly applying these diffusion-based adversarial purification methods to the video domain suffers from performance and efficiency degradation. To achieve an efficient and effective video adversarial defense method, we propose the first diffusion-based video purification framework to improve video recognition models' adversarial robustness: VideoPure. Given an adversarial example, we first employ temporal DDIM inversion to transform the input distribution into a temporally consistent and trajectory-defined distribution, covering adversarial noise while preserving more video structure. Then, during DDIM denoising, we leverage intermediate results at each denoising step and conduct guided spatial-temporal optimization, removing adversarial noise while maintaining temporal consistency. Finally, we input the list of optimized intermediate results into the video recognition model for multi-step voting to obtain the predicted class. We investigate the defense performance of our method against black-box, gray-box, and adaptive attacks on benchmark datasets and models. Compared with other adversarial purification methods, our method demonstrates better overall defense performance against different attacks. Our code is available at https://github.com/deep-kaixun/VideoPure.
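The pipeline described in the abstract (temporal DDIM inversion → guided denoising with intermediate results → multi-step voting) can be sketched at a high level. The sketch below is a hypothetical illustration of the control flow only: `ddim_invert`, `guided_denoise_step`, and `classify` are stand-in functions invented here, not the authors' implementation.

```python
# Hypothetical sketch of the three-stage defense flow from the abstract:
# (1) temporal DDIM inversion, (2) guided denoising that keeps every
# intermediate result, (3) majority voting over per-step predictions.
# All function bodies are placeholders, NOT the VideoPure code.
from collections import Counter

def ddim_invert(video, num_steps):
    """Stand-in for temporal DDIM inversion: maps the (possibly
    adversarial) input onto a trajectory of noised latents."""
    return [(t, video) for t in range(num_steps, 0, -1)]

def guided_denoise_step(latent):
    """Stand-in for one guided spatial-temporal denoising step;
    returns an intermediate purified video."""
    t, video = latent
    return ("purified", t, video)

def classify(intermediate):
    """Stand-in video recognition model; a real model would run
    inference on the purified clip here."""
    return "class_A"

def videopure_predict(video, num_steps=4):
    # 1) invert the input into a trajectory-defined latent sequence
    trajectory = ddim_invert(video, num_steps)
    # 2) denoise step by step, keeping every intermediate result
    intermediates = [guided_denoise_step(z) for z in trajectory]
    # 3) classify each intermediate and take a majority vote
    votes = Counter(classify(x) for x in intermediates)
    return votes.most_common(1)[0][0]

print(videopure_predict("adv_video"))
```

The voting stage is what distinguishes this from single-shot purification: because every denoising step yields a usable reconstruction, the final label aggregates predictions across the whole trajectory rather than trusting one endpoint.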
Problem

Research questions and friction points this paper is trying to address.

Video Recognition
Adversarial Attacks
Model Robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

VideoPure
Adversarial Sample Purification
Optimized Diffusion-based Method
Kaixun Jiang
Fudan University
Computer Vision · Adversarial Examples
Zhaoyu Chen
TikTok
AI Security · Trustworthy AI · Multimodal AI · Generative AI
Jiyuan Fu
Fudan University
Lingyi Hong
Fudan University
Computer Vision
Jinglun Li
Shanghai Engineering Research Center of AI Robotics, Academy for Engineering & Technology, Fudan University, Shanghai, China, and also with Engineering Research Center of AI & Robotics, Ministry of Education, Academy for Engineering & Technology, Fudan University, Shanghai, China
Wenqiang Zhang
Shanghai Engineering Research Center of AI Robotics, Academy for Engineering & Technology, Fudan University, Shanghai, China, and also with Engineering Research Center of AI & Robotics, Ministry of Education, Academy for Engineering & Technology, Fudan University, Shanghai, China; Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, China