AdLift: Lifting Adversarial Perturbations to Safeguard 3D Gaussian Splatting Assets Against Instruction-Driven Editing

📅 2025-12-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address security vulnerabilities of 3D Gaussian Splatting (3DGS) assets under instruction-driven editing, this paper proposes the first cross-view robust defense designed specifically for 3DGS assets. The core innovation lies in lifting 2D adversarial perturbations into the 3D Gaussian parameter space and optimizing them with a customized Lifted Projected Gradient Descent (Lifted PGD) algorithm that combines gradient truncation, projection constraints, and alternating image-to-Gaussian fitting. This yields strong resilience against both arbitrary-view rendering and novel-view synthesis under adversarial editing instructions. Crucially, the injected perturbations remain visually imperceptible while substantially improving robustness against state-of-the-art 2D- and 3D-based instruction-editing attacks, and the method preserves high-fidelity rendering without visual-quality degradation. Experimental results demonstrate substantial gains in cross-view adversarial robustness, establishing a foundation for secure, editable 3DGS representations.

📝 Abstract
Recent studies have extended diffusion-based instruction-driven 2D image editing pipelines to 3D Gaussian Splatting (3DGS), enabling faithful manipulation of 3DGS assets and greatly advancing 3DGS content creation. However, this capability also exposes those assets to serious risks of unauthorized editing and malicious tampering. Although imperceptible adversarial perturbations against diffusion models have proven effective for protecting 2D images, applying them to 3DGS raises two major challenges: achieving view-generalizable protection and balancing invisibility with protection capability. In this work, we propose the first editing safeguard for 3DGS, termed AdLift, which prevents instruction-driven editing across arbitrary views and dimensions by lifting strictly bounded 2D adversarial perturbations into a 3D Gaussian-represented safeguard. To ensure that the adversarial perturbations are both effective and invisible, these safeguard Gaussians are progressively optimized across training views using a tailored Lifted PGD, which first performs gradient truncation during back-propagation from the editing model to the rendered image and applies projected gradients to strictly constrain the image-level perturbation; the resulting perturbation is then back-propagated to the safeguard Gaussian parameters via an image-to-Gaussian fitting operation. Alternating between gradient truncation and image-to-Gaussian fitting yields consistent adversarial protection across different viewpoints and generalizes to novel views. Qualitative and quantitative results demonstrate that AdLift effectively protects against state-of-the-art instruction-driven 2D image and 3DGS editing.
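The abstract's image-level constraint is a standard PGD-style update: ascend the editing model's loss at the rendered image, then project the perturbation back into a strict L-infinity ball around the clean render. The sketch below is a minimal illustration of that projection step in numpy, not the authors' implementation; the function name, step size `alpha`, and budget `eps` are illustrative assumptions.

```python
import numpy as np

def pgd_step(image, grad, clean_image, alpha=2 / 255, eps=8 / 255):
    """One projected-gradient step (illustrative sketch).

    image:       current rendered image (perturbed so far)
    grad:        gradient of the editing model's loss w.r.t. the image,
                 which we ascend to disrupt instruction-driven editing
    clean_image: the unperturbed render, anchor of the L-inf budget
    """
    # Signed-gradient ascent step, as in standard PGD.
    perturbed = image + alpha * np.sign(grad)
    # Project the perturbation into the L-inf ball of radius eps.
    delta = np.clip(perturbed - clean_image, -eps, eps)
    # Keep pixel values in a valid range.
    return np.clip(clean_image + delta, 0.0, 1.0)
```

In the paper's pipeline this step operates on the 3DGS render per training view, with the gradient truncated at the rendered image before it reaches the Gaussian parameters; the fitting stage then pushes the bounded image-level perturbation back into the Gaussians.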
Problem

Research questions and friction points this paper is trying to address.

Protects 3D Gaussian Splatting assets from unauthorized editing
Lifts 2D adversarial perturbations to 3D for view-consistent protection
Balances perturbation invisibility with effective editing prevention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lifts 2D adversarial perturbations into 3D Gaussian safeguard
Uses tailored Lifted PGD with gradient truncation and fitting
Ensures view-generalizable protection balancing invisibility and effectiveness
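The alternation the bullets describe (bounded image-level PGD, then fitting Gaussian parameters to the perturbed render) can be sketched on a toy problem. Everything here is an assumption for illustration: rendering is stood in by a fixed linear map, the editing model's loss by a quadratic, and the image-to-Gaussian fitting by a least-squares solve; the real method differentiates through a 3DGS rasterizer and an instruction-editing diffusion model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical): rendering as a fixed linear map from
# Gaussian parameters to pixels; the editing model prefers `target`.
render_matrix = rng.normal(size=(16, 8))  # pixels x Gaussian params
gaussians = rng.normal(size=8)            # safeguard Gaussian parameters
target = rng.normal(size=16)              # editing model's preferred output

def render(g):
    return render_matrix @ g

def editing_loss_grad(img):
    # Gradient of 0.5 * ||img - target||^2; ascended to disrupt editing.
    return img - target

clean = render(gaussians)   # unperturbed render, anchors the L-inf budget
eps, alpha = 0.05, 0.01

for _ in range(20):
    # 1) Image-level PGD step with projection; in the real method the
    #    gradient is truncated here rather than flowing into the Gaussians.
    img = render(gaussians)
    img = img + alpha * np.sign(editing_loss_grad(img))
    img = clean + np.clip(img - clean, -eps, eps)
    # 2) Image-to-Gaussian fitting: refit parameters to the perturbed
    #    render (least squares stands in for the paper's fitting step).
    gaussians, *_ = np.linalg.lstsq(render_matrix, img, rcond=None)

final = render(gaussians)
```

The point of the alternation is that the perturbation ends up stored in the Gaussian parameters themselves, so any view rendered from them carries the protection, rather than a single 2D image.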