GaussTrap: Stealthy Poisoning Attacks on 3D Gaussian Splatting for Targeted Scene Confusion

๐Ÿ“… 2025-04-29
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work is the first to identify and systematically characterize backdoor vulnerabilities in 3D Gaussian Splatting (3DGS) for safety-critical applications such as autonomous driving and AR/VR. Exploiting the view-dependent rendering mechanism of 3DGS, the authors propose the first stealthy poisoning attack framework tailored to it: the attack injects perspective-level controllable perturbations guided by rendering gradients, employs a three-stage co-optimization (attack, stabilization, and clean training), and enforces viewpoint-aware regularization alongside a perceptual consistency loss. The method achieves high attack success rates (>92%) in targeted views while preserving reconstruction fidelity in benign views (PSNR degradation <0.3 dB). Extensive experiments on both synthetic and real-world datasets confirm the attack's visual imperceptibility and strong generalization across scenes and viewpoints. This is the first empirical demonstration of security risks in neural 3D reconstruction models, establishing a critical threat benchmark and informing defense strategies for trustworthy neural rendering.

๐Ÿ“ Abstract
As 3D Gaussian Splatting (3DGS) emerges as a breakthrough in scene representation and novel view synthesis, its rapid adoption in safety-critical domains (e.g., autonomous systems, AR/VR) urgently demands scrutiny of potential security vulnerabilities. This paper presents the first systematic study of backdoor threats in 3DGS pipelines. We identify that adversaries may implant backdoor views to induce malicious scene confusion during inference, potentially leading to environmental misperception in autonomous navigation or spatial distortion in immersive environments. To uncover this risk, we propose GaussTrap, a novel poisoning attack method targeting 3DGS models. GaussTrap injects malicious views at specific attack viewpoints while preserving high-quality rendering in non-target views, ensuring minimal detectability and maximizing potential harm. Specifically, the proposed method consists of a three-stage pipeline (attack, stabilization, and normal training) to implant stealthy, viewpoint-consistent poisoned renderings in 3DGS, jointly optimizing attack efficacy and perceptual realism to expose security risks in 3D rendering. Extensive experiments on both synthetic and real-world datasets demonstrate that GaussTrap can effectively embed imperceptible yet harmful backdoor views while maintaining high-quality rendering in normal views, validating its robustness, adaptability, and practical applicability.
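The three-stage pipeline (attack, stabilization, normal training) can be pictured as a loss-weighting schedule over training iterations. The sketch below is purely illustrative: the stage boundaries, weight values, and function names (`stage_of`, `loss_weights`, `combined_loss`) are assumptions for exposition, not the authors' actual implementation.

```python
# Hypothetical sketch of a three-stage poisoning schedule: how per-view loss
# weights might shift from implanting the malicious target-view rendering
# (attack) to restoring benign-view fidelity (normal training). All constants
# here are illustrative assumptions.

def stage_of(iteration, attack_end=5000, stabilize_end=8000):
    """Map a training iteration to one of the three pipeline stages."""
    if iteration < attack_end:
        return "attack"
    if iteration < stabilize_end:
        return "stabilization"
    return "normal"

def loss_weights(iteration, is_target_view):
    """Return (w_attack, w_clean) weights for a combined photometric loss.

    - attack stage: optimize poisoned target views aggressively.
    - stabilization stage: balance attack efficacy against clean fidelity.
    - normal stage: emphasize benign-view quality, keeping a small attack
      weight on target views so the implanted rendering persists.
    """
    if not is_target_view:
        return 0.0, 1.0  # benign views always train toward clean ground truth
    stage = stage_of(iteration)
    if stage == "attack":
        return 1.0, 0.0
    if stage == "stabilization":
        return 0.5, 0.5
    return 0.1, 0.9

def combined_loss(l_attack, l_clean, iteration, is_target_view):
    """Weighted sum of the attack-view and clean-view reconstruction losses."""
    w_atk, w_cln = loss_weights(iteration, is_target_view)
    return w_atk * l_attack + w_cln * l_clean
```

The key stealth property in the abstract (high attack success at target viewpoints, <0.3 dB PSNR degradation elsewhere) corresponds here to benign views never receiving attack gradient, while target views are annealed rather than dropped.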
Problem

Research questions and friction points this paper is trying to address.

Study backdoor threats in 3D Gaussian Splatting pipelines
Propose stealthy poisoning attacks causing targeted scene confusion
Ensure attack efficacy while maintaining high-quality normal rendering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Poisoning attack targets 3D Gaussian Splatting models
Injects malicious views at specific attack viewpoints
Ensures stealth with high-quality non-target rendering
Jiaxin Hong
Harbin Institute of Technology (Shenzhen), Shenzhen, China
Sixu Chen
South China University of Technology, Guangzhou, China
Shuoyang Sun
Harbin Institute of Technology, Shenzhen, China
Hongyao Yu
Tsinghua University
Hao Fang
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Yuqi Tan
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
Bin Chen
Harbin Institute of Technology, Shenzhen, China
Shuhan Qi
Harbin Institute of Technology, Shenzhen, China
Jiawei Li
Huawei Manufacturing, Shenzhen, China