🤖 AI Summary
To mitigate privacy risks arising from the misuse of users' publicly shared images by vision-language pre-trained (VLP) models, this paper proposes a compression-domain privacy-preserving framework, Privacy-Shielded Image Compression (PSIC). The encoder embeds a conditional implicit trigger mechanism, producing bitstreams with multiple decoding paths: default decoding preserves high visual fidelity (PSNR and MS-SSIM comparable to leading learned image compression (LIC) baselines) while substantially degrading VLP semantic understanding; full semantic recovery is possible only when a customizable input condition (e.g., a key) is supplied. The framework combines Conditional Latent Trigger Generation (CLTG), Uncertainty-Aware Encryption-Oriented (UAEO) optimization, and an adaptive multi-objective training strategy, and can be integrated plug-and-play into mainstream LIC models. Experiments demonstrate an over 72% reduction in semantic recognition accuracy against CLIP- and BLIP-based VLP models, while maintaining compatibility with downstream vision tasks.
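The conditional-trigger idea above can be sketched in a few lines. This is an illustrative toy (not the authors' code): all names, dimensions, and the MLP form are assumptions. A small learned mapping turns a condition vector (e.g., derived from a key) into a per-channel bias added to the compressed latent; without the condition, the latent is decoded as-is into the privacy-preserving version.

```python
import numpy as np

rng = np.random.default_rng(0)
COND_DIM, HIDDEN, CHANNELS = 16, 64, 192  # toy sizes, not from the paper

# Toy weights standing in for the trained trigger-generation network.
W1 = rng.standard_normal((COND_DIM, HIDDEN)) * 0.1
W2 = rng.standard_normal((HIDDEN, CHANNELS)) * 0.1

def trigger_bias(cond):
    """Condition vector -> per-channel latent bias (tiny two-layer MLP)."""
    h = np.maximum(cond @ W1, 0.0)  # ReLU
    return h @ W2                   # shape (CHANNELS,)

def apply_trigger(latent, cond=None):
    """Default path leaves the latent unchanged (privacy-preserving
    decode); the keyed path adds the condition-driven bias so the same
    decoder reconstructs the semantically faithful image."""
    if cond is None:
        return latent
    return latent + trigger_bias(cond)[:, None, None]  # broadcast over H, W

latent = rng.standard_normal((CHANNELS, 8, 8))  # toy compressed latent
key = rng.standard_normal(COND_DIM)
keyed = apply_trigger(latent, key)
assert keyed.shape == latent.shape
```

Because the bias is added in latent space, the mechanism is decoder-agnostic, which is consistent with the plug-and-play claim: any LIC decoder that consumes the latent can sit downstream.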
📝 Abstract
The improved semantic understanding of vision-language pre-trained (VLP) models has made it increasingly difficult to protect publicly posted images from being exploited by search engines and similar tools. In this context, this paper seeks to protect users' privacy by implementing defenses at the image compression stage. Specifically, we propose a flexible coding method, termed Privacy-Shielded Image Compression (PSIC), that can produce bitstreams with multiple decoding options. By default, the bitstream is decoded to preserve satisfactory perceptual quality while preventing interpretation by VLP models. Our method also retains the original image compression functionality: given a customizable input condition, the proposed scheme can reconstruct an image that preserves the full semantic information. A Conditional Latent Trigger Generation (CLTG) module is proposed to produce bias information based on customizable conditions to guide the decoding process toward different reconstructed versions, and an Uncertainty-Aware Encryption-Oriented (UAEO) optimization function is designed to leverage soft labels inferred from the target VLP model's uncertainty on the training data. This paper further incorporates an adaptive multi-objective optimization strategy to obtain improved encryption performance and perceptual quality simultaneously within a unified training process. The proposed scheme is plug-and-play and can be seamlessly integrated into most existing Learned Image Compression (LIC) models. Extensive experiments across multiple downstream tasks have demonstrated the effectiveness of our design.
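To make the UAEO idea concrete, here is a minimal sketch of an uncertainty-weighted encryption objective. The exact loss form is an assumption for illustration, not the paper's formulation: soft labels come from the target VLP's predictions on the original image, each sample is weighted by the VLP's confidence (low entropy means high weight), and the sign is flipped so that minimizing the loss pushes the protected reconstruction's predictions *away* from those soft labels.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def uaeo_loss(logits_protected, logits_original):
    """Hypothetical uncertainty-weighted encryption term.

    logits_*: (batch, num_classes) scores from the target VLP model
    on the protected reconstruction and the original image.
    """
    # Soft labels from the target VLP on the original image.
    p_soft = softmax(logits_original)
    # Confidence weight: 1 - normalized entropy of the soft labels,
    # so samples the VLP is sure about drive the encryption harder.
    ent = -(p_soft * np.log(p_soft + 1e-12)).sum(-1)
    w = 1.0 - ent / np.log(p_soft.shape[-1])
    # Cross-entropy of the protected image's predictions against the
    # soft labels; negated so minimizing *increases* the mismatch.
    q = softmax(logits_protected)
    ce = -(p_soft * np.log(q + 1e-12)).sum(-1)
    return -(w * ce).mean()
```

In a full training loop this term would be balanced against rate and distortion losses, which is where the paper's adaptive multi-objective strategy comes in.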