ComplicitSplat: Downstream Models are Vulnerable to Blackbox Attacks by 3D Gaussian Splat Camouflages

📅 2025-08-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
As 3D Gaussian Splatting (3DGS) gains traction in safety-critical applications like autonomous driving, this work uncovers a novel physical-domain attack surface: an adversary with no access to model internals can exploit 3DGS's view-dependent shading mechanism to embed adversarial colors and textures on object surfaces that are visible only from specific viewpoints, thereby launching black-box attacks against downstream object detectors. Method: Our approach integrates differentiable 3DGS rendering, viewpoint-aware texture optimization, and a generic black-box attack framework, yielding attacks that transfer across diverse detector architectures, including one-stage (e.g., YOLOv8), two-stage (e.g., Faster R-CNN), and Transformer-based (e.g., DETR) models. Contribution/Results: We demonstrate successful attacks on both real-world captured and synthetic data, achieving high transferability and practical feasibility. This is the first systematic study to expose structural security vulnerabilities inherent in 3D-reconstruction-driven perception systems.
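The view-dependent shading the attack exploits comes from how 3DGS typically stores each Gaussian's color as spherical-harmonics (SH) coefficients, so the rendered color changes with the viewing direction. The following is a minimal sketch of degree-1 SH color evaluation; the coefficient values are illustrative, not taken from the paper, and real 3DGS implementations usually go up to degree 3.

```python
import numpy as np

# Real degree-0/1 spherical-harmonics constants, as used in common
# 3DGS implementations; the coefficients below are made up for illustration.
C0 = 0.28209479177387814
C1 = 0.4886025119029199

def sh_color(sh_coeffs, view_dir):
    """Evaluate the view-dependent RGB color of one Gaussian from its
    degree-1 SH coefficients (shape 4 x 3) and a view direction."""
    x, y, z = view_dir / np.linalg.norm(view_dir)
    basis = np.array([C0, -C1 * y, C1 * z, -C1 * x])
    # Color = SH basis dotted with coefficients, shifted so the DC term
    # is centered at mid-gray (the convention used by common 3DGS code).
    return np.clip(basis @ sh_coeffs + 0.5, 0.0, 1.0)

# Illustrative coefficients: a splat that looks mid-gray head-on (+z)
# but shifts strongly red when viewed from the side (+x) -- the kind of
# viewpoint-specific appearance an adversary could optimize toward.
sh = np.zeros((4, 3))
sh[3] = [-1.0, 0.4, 0.4]  # x-direction term (multiplied by -C1 * x)

front = sh_color(sh, np.array([0.0, 0.0, 1.0]))  # neutral gray
side = sh_color(sh, np.array([1.0, 0.0, 0.0]))   # reddish
```

Because this evaluation is differentiable in the SH coefficients, an attacker can in principle backpropagate a detector's loss through the renderer to optimize colors that only manifest from chosen viewpoints.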

📝 Abstract
As 3D Gaussian Splatting (3DGS) gains rapid adoption in safety-critical tasks for efficient novel-view synthesis from static images, how might an adversary tamper with images to cause harm? We introduce ComplicitSplat, the first attack that exploits standard 3DGS shading methods to create viewpoint-specific camouflage (colors and textures that change with viewing angle), embedding adversarial content in scene objects that is visible only from specific viewpoints, without requiring access to model architecture or weights. Our extensive experiments show that ComplicitSplat generalizes to successfully attack a variety of popular detectors, including single-stage, multi-stage, and transformer-based models, on both real-world captures of physical objects and synthetic scenes. To our knowledge, this is the first black-box attack on downstream object detectors using 3DGS, exposing a novel safety risk for applications such as autonomous navigation and other mission-critical robotic systems.
Problem

Research questions and friction points this paper is trying to address.

Exploits 3DGS shading to create viewpoint-specific adversarial camouflage
Attacks black-box object detectors without access to model architecture or weights
Exposes safety risks in autonomous navigation systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exploits 3DGS shading for camouflage
Embeds viewpoint-specific adversarial content
Black-box attack without model access