AI Summary
This study uncovers a novel adversarial threat against open-source video foundation models (VFMs) under zero-task-knowledge conditions, i.e., without access to downstream tasks, training data, model architectures, or query interfaces. To exploit this threat, we propose the Transferable Video Attack (TVA), a temporal-aware attack framework that integrates bidirectional contrastive learning with a temporal consistency loss to generate cross-task transferable adversarial videos using only pretrained VFMs. Experiments across 24 diverse video understanding tasks demonstrate that TVA significantly outperforms conventional transfer-based attacks, achieving high attack success rates against both downstream models and multimodal large language models (MLLMs) fine-tuned from VFMs. Crucially, our results provide the first empirical evidence that pretrained VFM representations themselves exhibit substantial inherent adversarial vulnerability. This work establishes a new paradigm for security assessment in the open video model ecosystem.
Abstract
Large-scale Video Foundation Models (VFMs) have significantly advanced a wide range of video-related tasks, whether through task-specific models or Multi-modal Large Language Models (MLLMs). However, the open accessibility of VFMs also introduces critical security risks, as adversaries can exploit full knowledge of a VFM to launch potent attacks. This paper investigates a novel and practical adversarial threat scenario: attacking downstream models or MLLMs fine-tuned from open-source VFMs, without requiring access to the victim task, training data, model queries, or architecture. In contrast to conventional transfer-based attacks that rely on task-aligned surrogate models, we demonstrate that adversarial vulnerabilities can be exploited directly from the VFMs. To this end, we propose the Transferable Video Attack (TVA), a temporal-aware adversarial attack method that leverages the temporal representation dynamics of VFMs to craft effective perturbations. TVA integrates a bidirectional contrastive learning mechanism to maximize the discrepancy between clean and adversarial features, and introduces a temporal consistency loss that exploits motion cues to enhance the sequential impact of perturbations. TVA avoids the need to train expensive surrogate models or to access domain-specific data, thereby offering a more practical and efficient attack strategy. Extensive experiments across 24 video-related tasks demonstrate the efficacy of TVA against downstream models and MLLMs, revealing a previously underexplored security vulnerability in the deployment of video models.
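To make the two loss terms in the abstract concrete, here is a minimal, hypothetical sketch of what an attacker might optimize over frame features extracted from a pretrained VFM. The exact formulations are defined in the paper; the function names, the choice of cosine similarity, and the use of frame-difference features as "motion cues" are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def cosine_sim(a, b, eps=1e-8):
    # Cosine similarity along the last (feature) axis.
    a = a / (np.linalg.norm(a, axis=-1, keepdims=True) + eps)
    b = b / (np.linalg.norm(b, axis=-1, keepdims=True) + eps)
    return (a * b).sum(axis=-1)

def feature_discrepancy_loss(clean_feats, adv_feats):
    # Contrastive-style term (assumed form): the attacker minimizes the
    # similarity between clean and adversarial per-frame features, pushing
    # the adversarial video away from the clean one in VFM feature space.
    # clean_feats, adv_feats: (T, D) arrays of per-frame features.
    return cosine_sim(clean_feats, adv_feats).mean()

def temporal_consistency_loss(clean_feats, adv_feats):
    # Temporal term (assumed form): compare frame-to-frame difference
    # features ("motion cues") of the clean and adversarial videos, so the
    # perturbation disrupts sequence dynamics, not just per-frame content.
    clean_motion = clean_feats[1:] - clean_feats[:-1]   # (T-1, D)
    adv_motion = adv_feats[1:] - adv_feats[:-1]         # (T-1, D)
    return cosine_sim(clean_motion, adv_motion).mean()
```

In this sketch the attacker would minimize a weighted sum of the two terms with respect to the video perturbation (e.g., via projected gradient steps under an L-infinity budget), using only the frozen VFM encoder and no downstream-task information.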