On the Adversarial Robustness of 3D Large Vision-Language Models

📅 2026-01-10
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
The robustness of 3D large vision-language models (VLMs) under adversarial attacks remains largely unexplored, raising concerns about their potential misuse to generate harmful content. This work presents the first systematic evaluation of the adversarial robustness of point cloud–based 3D VLMs, introducing two novel attack strategies: visual attacks that perturb visual token features and caption attacks that manipulate the output token sequence, both evaluated in untargeted and targeted settings. Experimental results reveal that 3D VLMs are highly vulnerable to untargeted attacks yet exhibit greater resilience than their 2D counterparts under targeted attacks. These findings underscore both the promise of 3D VLMs for secure deployment and the critical need for improved robustness mechanisms.

📝 Abstract
3D Vision-Language Models (VLMs), such as PointLLM and GPT4Point, have shown strong reasoning and generalization abilities in 3D understanding tasks. However, their adversarial robustness remains largely unexplored. Prior work on 2D VLMs has shown that integrating visual inputs significantly increases vulnerability to adversarial attacks, making these models easier to manipulate into generating toxic or misleading outputs. In this paper, we investigate whether incorporating 3D vision similarly compromises the robustness of 3D VLMs. To this end, we present the first systematic study of adversarial robustness in point-based 3D VLMs. We propose two complementary attack strategies: Vision Attack, which perturbs the visual token features produced by the 3D encoder and projector to assess the robustness of vision-language alignment; and Caption Attack, which directly manipulates output token sequences to evaluate end-to-end system robustness. Each attack includes both untargeted and targeted variants to measure general vulnerability and susceptibility to controlled manipulation. Our experiments reveal that 3D VLMs exhibit significant adversarial vulnerabilities under untargeted attacks, while demonstrating greater resilience than their 2D counterparts against targeted attacks that force specific harmful outputs. These findings highlight the importance of improving the adversarial robustness of 3D VLMs, especially as they are deployed in safety-critical applications.
Problem

Research questions and friction points this paper is trying to address.

Adversarial Robustness
3D Vision-Language Models
Point Cloud
Adversarial Attacks
Vision-Language Alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D Vision-Language Models
Adversarial Robustness
Vision Attack
Caption Attack
Point Cloud