RemedyGS: Defend 3D Gaussian Splatting against Computation Cost Attacks

๐Ÿ“… 2025-11-27
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
3D Gaussian Splatting (3DGS) is vulnerable to computation cost attacks: adversarially crafted inputs that exhaust system resources and cause denial-of-service, severely compromising its practical deployment. To address this, we propose the first black-box defense framework specifically designed for 3DGS, comprising two core components: poisoned-texture detection and image purification. We integrate adversarial training to align the distributions of recovered and original natural images, enhancing both robustness and reconstruction fidelity. The method operates without access to model internals, enabling effective identification and restoration of corrupted inputs. Extensive experiments demonstrate consistent defense efficacy under white-box, black-box, and adaptive attack settings, with reconstruction quality degradation below 1.2%. The framework achieves state-of-the-art security performance while maintaining computational efficiency, offering a practical, deployable solution for real-world 3DGS systems.

๐Ÿ“ Abstract
As a mainstream technique for 3D reconstruction, 3D Gaussian splatting (3DGS) has been applied in a wide range of applications and services. Recent studies have revealed critical vulnerabilities in this pipeline and introduced computation cost attacks that lead to malicious resource occupancies and even denial-of-service (DoS) conditions, thereby hindering the reliable deployment of 3DGS. In this paper, we propose the first effective and comprehensive black-box defense framework, named RemedyGS, against such computation cost attacks, safeguarding 3DGS reconstruction systems and services. Our pipeline comprises two key components: a detector to identify the attacked input images with poisoned textures and a purifier to recover the benign images from their attacked counterparts, mitigating the adverse effects of these attacks. Moreover, we incorporate adversarial training into the purifier to enforce distributional alignment between the recovered and original natural images, thereby enhancing the defense efficacy. Experimental results demonstrate that our framework effectively defends against white-box, black-box, and adaptive attacks in 3DGS systems, achieving state-of-the-art performance in both safety and utility.
Problem

Research questions and friction points this paper is trying to address.

Defend 3D Gaussian Splatting against computation cost attacks
Identify and purify attacked input images with poisoned textures
Enhance defense via adversarial training for distributional alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Black-box defense framework against computation cost attacks
Detector identifies poisoned textures in input images
Purifier recovers benign images with adversarial training
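A minimal sketch of the detect-then-purify idea described above. This is illustrative only: the paper's detector and purifier are learned networks trained with an adversarial objective, whereas here both are stand-ins built from a spectral heuristic — the `high_freq_energy` score, the radial `cutoff`, and the detection `threshold` are all assumptions, not the paper's method.

```python
import numpy as np

def _radial_freq(h, w):
    """Normalized radial frequency of each (shifted) spectrum bin."""
    yy, xx = np.mgrid[0:h, 0:w]
    return np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)

def high_freq_energy(img, cutoff=0.25):
    """Fraction of spectral energy above a radial frequency cutoff."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    r = _radial_freq(*img.shape)
    total = spec.sum()
    return spec[r > cutoff].sum() / total if total > 0 else 0.0

def detect_poisoned(img, threshold=0.35):
    """Flag images whose texture is implausibly high-frequency."""
    return high_freq_energy(img) > threshold

def purify(img, cutoff=0.25):
    """Low-pass the image to strip high-frequency poisoned texture."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    spec[_radial_freq(*img.shape) > cutoff] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))

def defend(images):
    """Detect-then-purify: pass benign images through, clean flagged ones."""
    return [purify(im) if detect_poisoned(im) else im for im in images]
```

In the actual framework, `purify` would be a generative restoration network whose outputs are pushed toward the natural-image distribution by a discriminator, rather than a fixed low-pass filter.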
๐Ÿ”Ž Similar Papers
Y
Yanping Li
Hong Kong University of Science and Technology
Z
Zhening Liu
Hong Kong University of Science and Technology
Z
Zijian Li
Hong Kong University of Science and Technology
Zehong Lin
Zehong Lin
Research Assistant Professor, Hong Kong University of Science and Technology
Edge AIMachine Learning
J
Jun Zhang
Hong Kong University of Science and Technology