AI Summary
This work addresses the vulnerability of self-supervised diffusion models to backdoor attacks at the representation layer and proposes the first attack that hijacks the representations of trigger samples toward a target image in the PCA semantic space. By imposing coordinated constraints in the latent space, pixel space, and feature distribution space, and by introducing a representation dispersion regularization to enhance stealthiness, the approach achieves precise triggering while preserving the model's normal functionality (high utility). Extensive experiments demonstrate that the proposed attack significantly outperforms existing methods across multiple benchmark datasets, achieving superior FID and MSE metrics. Moreover, it reliably implants backdoors across diverse model architectures and effectively evades state-of-the-art defense mechanisms.
Abstract
Self-supervised diffusion models learn high-quality visual representations via latent space denoising. However, their representation layer poses a distinct threat: unlike traditional attacks targeting generative outputs, its unconstrained latent semantic space allows for stealthy backdoors, permitting malicious control upon triggering. In this paper, we propose BadRSSD, the first backdoor attack targeting the representation layer of self-supervised diffusion models. Specifically, it hijacks the semantic representations of poisoned samples with triggers in Principal Component Analysis (PCA) space toward those of a target image, then controls the denoising trajectory during diffusion by applying coordinated constraints across latent, pixel, and feature distribution spaces to steer the model toward generating the specified target. Additionally, we integrate representation dispersion regularization into the constraint framework to maintain feature space uniformity, significantly enhancing attack stealth. This approach preserves normal model functionality (high utility) while achieving precise target generation upon trigger activation (high specificity). Experiments on multiple benchmark datasets demonstrate that BadRSSD substantially outperforms existing attacks in both FID and MSE metrics, reliably establishing backdoors across different architectures and configurations, and effectively resisting state-of-the-art backdoor defenses.
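To make the attack's core ingredients concrete, the following is a minimal NumPy sketch of two of the losses the abstract describes: aligning poisoned-sample representations with a target image in the top-k PCA space of clean representations, and a uniformity-style dispersion regularizer that keeps the poisoned feature distribution spread out. All function names and formulas here are illustrative assumptions, not the paper's actual objective.

```python
import numpy as np

def pca_components(clean_feats, k):
    # Hypothetical helper: top-k principal directions of clean representations.
    centered = clean_feats - clean_feats.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]                         # (k, d) row-wise components

def hijack_loss(poisoned_feats, target_feat, components):
    # Align PCA-space coordinates of poisoned representations with those of
    # the target image (assumed MSE form; the paper's exact loss may differ).
    p = poisoned_feats @ components.T     # (n, k)
    t = target_feat @ components.T        # (k,)
    return np.mean((p - t) ** 2)

def dispersion_reg(feats, temp=2.0):
    # Dispersion regularizer in the spirit of a uniformity loss: penalize
    # representations that cluster together, so the backdoored model's
    # feature distribution stays close to uniform on the hypersphere.
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sq_dists = np.sum((normed[:, None] - normed[None, :]) ** 2, axis=-1)
    n = feats.shape[0]
    off_diag = sq_dists[~np.eye(n, dtype=bool)]
    return np.log(np.mean(np.exp(-temp * off_diag)))
```

In a full attack these terms would be summed (with weights) alongside the pixel- and latent-space constraints and optimized jointly with the standard denoising objective; this sketch only illustrates the representation-space part.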