R-PGA: Robust Physical Adversarial Camouflage Generation via Relightable 3D Gaussian Splatting

📅 2026-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited generalization of existing physical adversarial camouflage methods in complex dynamic scenes, where geometric and radiometric variations—such as viewpoint shifts, illumination changes, and atmospheric scattering—severely degrade performance. To overcome this, we propose a novel adversarial camouflage generation framework based on relightable 3D Gaussian Splatting. Our approach leverages 3D Gaussian Splatting for high-fidelity scene reconstruction and explicit decoupling of material and lighting, while an image translation model synthesizes context-aware background content. Furthermore, we introduce a hard physical configuration mining mechanism that actively explores worst-case imaging conditions to flatten the loss landscape. This strategy substantially enhances the robustness and stability of adversarial camouflage under diverse physical environments, effectively mitigating the fragility of conventional methods in dynamic settings.
📝 Abstract
Physical adversarial camouflage poses a severe security threat to autonomous driving systems by mapping adversarial textures onto 3D objects. Nevertheless, current methods remain brittle in complex dynamic scenarios, failing to generalize across diverse geometric (e.g., viewing configurations) and radiometric (e.g., dynamic illumination, atmospheric scattering) variations. We attribute this deficiency to two fundamental limitations in simulation and optimization. First, the reliance on coarse, oversimplified simulations (e.g., via CARLA) induces a significant domain gap, confining optimization to a biased feature space. Second, standard strategies targeting average performance result in a rugged loss landscape, leaving the camouflage vulnerable to configuration shifts. To bridge these gaps, we propose the Relightable Physical 3D Gaussian Splatting (3DGS) based Attack framework (R-PGA). Technically, to address the simulation fidelity issue, we leverage 3DGS to ensure photo-realistic reconstruction and augment it with physically disentangled attributes to decouple intrinsic material from lighting. Furthermore, we design a hybrid rendering pipeline that leverages precise Relightable 3DGS for foreground rendering, while employing a pre-trained image translation model to synthesize plausible relighted backgrounds that align with the relighted foreground. To address the optimization robustness issue, we propose the Hard Physical Configuration Mining (HPCM) module, designed to actively mine worst-case physical configurations and suppress their corresponding loss peaks. This strategy not only diminishes the overall loss magnitude but also effectively flattens the rugged loss landscape, ensuring consistent adversarial effectiveness and robustness across varying physical configurations.
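The HPCM idea described above — sample candidate physical configurations, keep the worst-case (highest-loss) ones, and optimize the texture against that hard subset rather than the average — can be illustrated with a toy sketch. Note this is not the authors' implementation: the quadratic `detection_loss`, the configuration distribution, and all constants below are hypothetical stand-ins for a real detector loss rendered under varying viewpoint/illumination.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 4  # toy texture dimensionality (stand-in for a full texture map)

def detection_loss(texture, config):
    """Toy stand-in for detector confidence on the rendered object under
    one physical configuration (viewpoint/illumination offset).
    Lower = better camouflage."""
    return float(np.sum((texture + config) ** 2))

def loss_grad(texture, config):
    # Analytic gradient of the toy loss w.r.t. the texture.
    return 2.0 * (texture + config)

def worst_case(texture, configs):
    return max(detection_loss(texture, c) for c in configs)

# Fixed evaluation pool of physical configurations.
eval_pool = [rng.normal(scale=0.5, size=DIM) for _ in range(32)]
texture = np.full(DIM, 2.0)          # deliberately bad starting texture
before = worst_case(texture, eval_pool)

for _ in range(200):
    # 1) Sample candidate physical configurations (EOT-style pool).
    pool = [rng.normal(scale=0.5, size=DIM) for _ in range(16)]
    # 2) Hard mining: keep only the highest-loss configurations.
    hard = sorted(pool, key=lambda c: detection_loss(texture, c),
                  reverse=True)[:4]
    # 3) Descend on the average gradient over the hard set, which
    #    suppresses loss peaks instead of just the mean loss.
    g = np.mean([loss_grad(texture, c) for c in hard], axis=0)
    texture -= 0.1 * g

after = worst_case(texture, eval_pool)
print(before, after)  # worst-case loss over the eval pool should drop
```

Optimizing against the mined hard set approximates a min-max objective, which is what flattens the loss landscape: peaks induced by adverse configurations are targeted directly instead of being averaged away.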
Problem

Research questions and friction points this paper is trying to address.

Physical Adversarial Camouflage
Robustness
Dynamic Illumination
Viewing Configurations
Domain Gap
Innovation

Methods, ideas, or system contributions that make the work stand out.

Relightable 3D Gaussian Splatting
Physical Adversarial Camouflage
Hard Physical Configuration Mining
Hybrid Rendering Pipeline
Domain Gap Reduction
Tianrui Lou
School of Cyber Science and Technology, Shenzhen Campus, Sun Yat-sen University, Shenzhen 518107, China
Siyuan Liang
College of Computing and Data Science, Nanyang Technological University
Trustworthy Foundation Model
Jiawei Liang
School of Cyber Science and Technology, Shenzhen Campus, Sun Yat-sen University, Shenzhen 518107, China
Yuze Gao
School of Intelligent Systems Engineering, Shenzhen Campus, Sun Yat-sen University, Shenzhen 518107, China
Xiaochun Cao
Sun Yat-sen University
Computer Vision · Artificial Intelligence · Multimedia · Machine Learning