Investigating Vulnerabilities and Defenses Against Audio-Visual Attacks: A Comprehensive Survey Emphasizing Multimodal Models

📅 2025-06-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work systematically surveys security threats against audio-visual multimodal large language models (MLLMs), focusing on three core attack categories: adversarial attacks, backdoor attacks, and jailbreaking attacks. To address the lack of a unified cross-modal analytical framework in prior research, we propose the first three-dimensional taxonomy—spanning attack objectives, trigger mechanisms, and modality-coupling characteristics—and uniquely incorporate audio-visual-specific threats, such as cross-modal prompt injection and audio-visual协同 perturbations. Through bibliometric analysis and technical root-cause attribution, we identify seven fundamental vulnerability sources and five critical limitations of prevailing defense mechanisms. Finally, we outline a forward-looking research roadmap centered on robust cross-modal alignment and trustworthy multimodal reasoning. This survey constitutes the first comprehensive, systematic review of security challenges in audio-visual MLLMs, thereby filling a significant gap in the literature.

Technology Category

Application Category

📝 Abstract
Multimodal large language models (MLLMs), which bridge the gap between audio-visual and natural language processing, achieve state-of-the-art performance on several audio-visual tasks. Despite the superior performance of MLLMs, the scarcity of high-quality audio-visual training data and computational resources necessitates the utilization of third-party data and open-source MLLMs, a trend that is increasingly observed in contemporary research. This prosperity masks significant security risks. Empirical studies demonstrate that the latest MLLMs can be manipulated to produce malicious or harmful content. This manipulation is facilitated exclusively through instructions or inputs, including adversarial perturbations and malevolent queries, effectively bypassing the internal security mechanisms embedded within the models. To gain a deeper comprehension of the inherent security vulnerabilities associated with audio-visual-based multimodal models, a series of surveys investigates various types of attacks, including adversarial and backdoor attacks. While existing surveys on audio-visual attacks provide a comprehensive overview, they are limited to specific types of attacks, which lack a unified review of various types of attacks. To address this issue and gain insights into the latest trends in the field, this paper presents a comprehensive and systematic review of audio-visual attacks, which include adversarial attacks, backdoor attacks, and jailbreak attacks. Furthermore, this paper also reviews various types of attacks in the latest audio-visual-based MLLMs, a dimension notably absent in existing surveys. Drawing upon comprehensive insights from a substantial review, this paper delineates both challenges and emergent trends for future research on audio-visual attacks and defense.
Problem

Research questions and friction points this paper is trying to address.

Investigating security risks in multimodal large language models
Reviewing diverse audio-visual attack types comprehensively
Addressing vulnerabilities in third-party data and open-source models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes multimodal large language models (MLLMs)
Investigates adversarial, backdoor, and jailbreak attacks
Reviews audio-visual attacks comprehensively and systematically