AI Summary
Video recognition models are vulnerable to spatiotemporal adversarial attacks, posing serious security risks in real-world deployment. To address this, we propose VideoPure, the first diffusion-based adversarial purification framework tailored for videos. It adapts DDIM sampling to the video domain, enabling efficient reconstruction of spatiotemporal adversarial examples via temporally consistent DDIM inversion. We further introduce a guided joint denoising strategy over the intermediate results of each denoising step and a multi-step voting mechanism to enhance robustness. Extensive experiments demonstrate that VideoPure achieves state-of-the-art robustness under black-box, gray-box, and adaptive attack settings. It consistently outperforms existing video purification methods across multiple benchmarks (Kinetics-400, Something-Something V2) and architectures (I3D, SlowFast, ViT-Vi3D), striking a strong balance among defense strength, inference efficiency, and cross-architecture generalizability. The source code is publicly available.
Abstract
Recent work indicates that video recognition models are vulnerable to adversarial examples, posing a serious security risk to downstream applications. However, current research has primarily focused on adversarial attacks, with limited work exploring defense mechanisms. Furthermore, due to the spatial-temporal complexity of videos, existing video defense methods suffer from high cost, overfitting, and limited defense performance. Recently, diffusion-based adversarial purification methods have achieved robust defense performance in the image domain. However, because videos carry an additional temporal dimension, directly applying these diffusion-based purification methods to the video domain suffers from both performance and efficiency degradation. To achieve an efficient and effective video adversarial defense, we propose the first diffusion-based video purification framework to improve the adversarial robustness of video recognition models: VideoPure. Given an adversarial example, we first employ temporal DDIM inversion to transform the input distribution into a temporally consistent, trajectory-defined distribution, covering the adversarial noise while preserving more of the video structure. Then, during DDIM denoising, we leverage the intermediate results at each denoising step and conduct guided spatial-temporal optimization, removing adversarial noise while maintaining temporal consistency. Finally, we feed the list of optimized intermediate results into the video recognition model and perform multi-step voting to obtain the predicted class. We investigate the defense performance of our method against black-box, gray-box, and adaptive attacks on benchmark datasets and models. Compared with other adversarial purification methods, our method demonstrates better overall defense performance against different attacks. Our code is available at https://github.com/deep-kaixun/VideoPure.
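The pipeline described above (temporal DDIM inversion, denoising with intermediate results, multi-step voting) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `temporal_ddim_inversion`, `ddim_denoise_with_intermediates`, and `classify` are hypothetical toy stand-ins (a fixed deterministic scaling replaces the real diffusion model, and a threshold on the mean replaces the video recognizer); only the overall control flow and the majority-voting step mirror the description.

```python
from collections import Counter

def temporal_ddim_inversion(frames, num_steps):
    """Toy stand-in for temporal DDIM inversion: deterministically maps the
    (possibly adversarial) input toward a latent along a fixed trajectory,
    applying the same schedule to every value so the transformation stays
    temporally consistent."""
    latent = list(frames)
    for _ in range(num_steps):
        latent = [0.9 * x for x in latent]  # toy deterministic "noising"
    return latent

def ddim_denoise_with_intermediates(latent, num_steps):
    """Toy DDIM denoising loop that records the intermediate result at every
    step; the real method additionally applies guided spatial-temporal
    optimization to each intermediate before recording it."""
    intermediates = []
    x = list(latent)
    for _ in range(num_steps):
        x = [v / 0.9 for v in x]  # toy deterministic "denoising"
        intermediates.append(list(x))
    return intermediates

def classify(frames):
    """Stub video recognizer: predicts class 1 if the mean value is positive,
    class 0 otherwise (a real model would be, e.g., I3D or SlowFast)."""
    return 1 if sum(frames) / len(frames) > 0 else 0

def videopure_predict(frames, num_steps=4):
    """End-to-end sketch: invert, denoise while collecting intermediates,
    then majority-vote over the per-intermediate predictions."""
    latent = temporal_ddim_inversion(frames, num_steps)
    intermediates = ddim_denoise_with_intermediates(latent, num_steps)
    votes = Counter(classify(x) for x in intermediates)
    return votes.most_common(1)[0][0]
```

Voting over every denoising step, rather than classifying only the final reconstruction, is what makes the prediction robust to any single intermediate being poorly purified.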