Immunizing 3D Gaussian Generative Models Against Unauthorized Fine-Tuning via Attribute-Space Traps

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the growing risk of intellectual property leakage in 3D Gaussian generative models caused by unauthorized fine-tuning of publicly released pre-trained weights. To counter this threat, we propose GaussLock, the first anti-fine-tuning defense framework designed specifically for 3D Gaussian models. GaussLock embeds structured perturbations into the parameter space through authorized knowledge distillation and an attribute-aware trap loss targeting key geometric and appearance attributes: position, scale, rotation, opacity, and color. This approach preserves performance on legitimate fine-tuning tasks while significantly degrading the geometric coherence and visual fidelity of unauthorized reconstructions. Extensive experiments on large-scale models demonstrate that GaussLock substantially increases LPIPS and reduces PSNR for unauthorized reconstructions, providing active immunity against illicit model adaptation.
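The summary describes a dual objective: distill authorized behavior from the frozen pre-trained weights while embedding traps in the Gaussian attribute space. Below is a minimal PyTorch sketch of one such training step; the model interface, the loss forms, and the weighting `lam` are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of a GaussLock-style dual-objective immunization step.
# Assumes the generator returns (rendered_views, gaussian_attributes); this
# interface and the loss forms are illustrative, not the paper's.
import torch
import torch.nn.functional as F

def immunization_step(student, teacher, authorized_batch, surrogate_batch,
                      trap_loss_fn, lam=1.0):
    """One optimization step for the immunized (student) model."""
    # Authorized distillation: match the frozen pre-trained teacher on
    # authorized data so legitimate fine-tuning performance is preserved.
    with torch.no_grad():
        teacher_render, _ = teacher(authorized_batch)
    student_render, _ = student(authorized_batch)
    distill_loss = F.mse_loss(student_render, teacher_render)

    # Attribute-space trap: on surrogate "unauthorized" data, push the
    # predicted Gaussian attributes toward degenerate configurations.
    _, gaussians = student(surrogate_batch)
    trap_loss = trap_loss_fn(gaussians)

    # Joint objective: fidelity on authorized tasks, traps everywhere else.
    return distill_loss + lam * trap_loss
```

Keeping the teacher frozen makes the distillation target the original pre-trained behavior rather than a moving one, which is what lets the joint objective trade off fidelity against the embedded traps.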

📝 Abstract
Recent large-scale generative models enable high-quality 3D synthesis. However, the public accessibility of pre-trained weights introduces a critical vulnerability. Adversaries can fine-tune these models to steal specialized knowledge acquired during pre-training, leading to intellectual property infringement. Unlike defenses for 2D images and language models, 3D generators require specialized protection due to their explicit Gaussian representations, which expose fundamental structural parameters directly to gradient-based optimization. We propose GaussLock, the first approach designed to defend 3D generative models against fine-tuning attacks. GaussLock is a lightweight parameter-space immunization framework that integrates authorized distillation with attribute-aware trap losses targeting position, scale, rotation, opacity, and color. Specifically, these traps systematically collapse spatial distributions, distort geometric shapes, align rotational axes, and suppress primitive visibility to fundamentally destroy structural integrity. By jointly optimizing these dual objectives, the distillation process preserves fidelity on authorized tasks while the embedded traps actively disrupt unauthorized reconstructions. Experiments on large-scale Gaussian models demonstrate that GaussLock effectively neutralizes unauthorized fine-tuning attacks. It substantially degrades the quality of unauthorized reconstructions, evidenced by significantly higher LPIPS and lower PSNR, while effectively maintaining performance on authorized fine-tuning.
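The abstract names four trap behaviors: collapsing spatial distributions, distorting geometric shapes, aligning rotational axes, and suppressing primitive visibility. A minimal sketch of what such attribute-aware trap terms could look like over a batch of Gaussian primitives follows; the tensor shapes, attribute names, and exact penalty forms are assumptions, since the paper defines its own losses.

```python
# Hypothetical attribute-aware trap terms for 3D Gaussian primitives.
# Assumed layout: 'xyz' (N,3) centers, 'scale' (N,3) positive axis lengths,
# 'rot' (N,4) unit quaternions, 'opacity' (N,1) in [0,1].
import torch

def attribute_trap_loss(g, w=(1.0, 1.0, 1.0, 1.0)):
    xyz, scale, rot, opacity = g["xyz"], g["scale"], g["rot"], g["opacity"]

    # Collapse the spatial distribution: pull every center to the centroid.
    l_pos = ((xyz - xyz.mean(dim=0, keepdim=True)) ** 2).sum(dim=1).mean()

    # Distort geometric shape: shrink the two shortest axes so primitives
    # degenerate into needles (assumes scales are positive).
    l_scale = scale.sort(dim=1).values[:, :2].mean()

    # Align rotational axes: drive all quaternions toward a shared reference
    # (|<q, ref>| -> 1 means identical orientation up to sign).
    ref = torch.tensor([1.0, 0.0, 0.0, 0.0], device=rot.device)
    l_rot = (1.0 - (rot * ref).sum(dim=1).abs()).mean()

    # Suppress primitive visibility: drive opacities toward zero. A color
    # term could analogously pull appearance features toward a constant.
    l_op = opacity.mean()

    return w[0] * l_pos + w[1] * l_scale + w[2] * l_rot + w[3] * l_op
```

Under the assumed parameterization each term is bounded below, so the joint objective with the distillation loss remains stable to optimize.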
Problem

Research questions and friction points this paper is trying to address.

3D generative models
unauthorized fine-tuning
intellectual property protection
Gaussian representations
model security
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D generative models
fine-tuning defense
attribute-space traps
Gaussian splatting
model immunization
🔎 Similar Papers
No similar papers found.
👥 Authors
Jianwei Zhang
Professor, School of Education, University at Albany, SUNY
CSCL, learning sciences, technology for creativity, knowledge building, inquiry-based learning
Sihan Cao
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Chaoning Zhang
Professor at UESTC (University of Electronic Science and Technology of China)
Computer Vision, LLM and VLM, GenAI and AIGC Detection
Ziming Hong
The University of Sydney
Trustworthy AI
Jiaxin Huang
MBZUAI
Machine Learning, Medical Image Analysis, 3D Vision
Pengcheng Zheng
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Caiyan Qin
School of Robotics and Advanced Manufacturing, Harbin Institute of Technology, Shenzhen 518055, China
Wei Dong
PhD candidate, School of Computer Science and Engineering, Northwestern Polytechnical University
Deep Learning
Yang Yang
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Tongliang Liu
Director, Sydney AI Centre, University of Sydney & Mohamed bin Zayed University of AI
Machine Learning, Learning with Noisy Labels, Trustworthy Machine Learning