AI Summary
This work addresses the privacy risk that personalized image generation, even when authorized, can still be linked to real-world identities through facial recognition after public sharing. To mitigate this, the authors propose a model-side identity disentanglement mechanism that substantially reduces the identifiability of generated images while preserving high-quality personalization. The approach employs an alternating optimization strategy that integrates short-term fine-tuning with identity-disentangled data training, complemented by a two-stage scheduling scheme to jointly optimize generation fidelity and privacy protection. Extensive experiments demonstrate the method's effectiveness across diverse datasets, textual prompts, and state-of-the-art face recognition systems. Furthermore, it lets users specify desired privacy levels, achieving a tunable trade-off between utility and identity privacy.
Abstract
Personalized text-to-image diffusion models (e.g., DreamBooth, LoRA) enable users to synthesize high-fidelity avatars from a few reference photos for social expression. However, once these generations are shared on social media platforms (e.g., Instagram, Facebook), they can be linked to the real user via face recognition systems, enabling identity tracking and profiling. Existing defenses mainly follow an anti-personalization strategy that protects publicly released reference photos by disrupting model fine-tuning. While effective against unauthorized personalization, they do not address another practical setting in which personalization is authorized, but the resulting public outputs still leak identity information.
To address this problem, we introduce a new defense setting, termed model-side output immunization, whose goal is to produce a personalized model that supports authorized personalization while reducing the identity linkability of public generations, with tunable control over the privacy-utility trade-off to accommodate diverse privacy needs. To this end, we propose Identity-Decoupled personalized Diffusion Models (IDDM), a model-side defense that integrates identity decoupling into the personalization pipeline. Concretely, IDDM follows an alternating procedure that interleaves short personalization updates with identity-decoupled data optimization, using a two-stage schedule to balance identity linkability suppression and generation utility. Extensive experiments across multiple datasets, diverse prompts, and state-of-the-art face recognition systems show that IDDM consistently reduces identity linkability while preserving high-quality personalized generation.
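The alternating procedure and two-stage schedule described above can be illustrated with a toy sketch. The code below is not the authors' implementation: it abstracts the diffusion model to a single scalar parameter, uses stand-in quadratic/Gaussian objectives for the personalization loss and face-recognition similarity, and all names (`train_iddm`, `privacy_weight`, the ramp schedule) are illustrative assumptions. It only demonstrates the high-level idea of interleaving utility updates with identity-decoupling updates under a tunable privacy weight.

```python
import math

def personalization_loss(theta, target=1.0):
    # Toy stand-in for the personalization (utility) objective,
    # e.g., a DreamBooth/LoRA reconstruction loss.
    return (theta - target) ** 2

def identity_similarity(theta, identity=0.9):
    # Toy stand-in for face-recognition similarity between generated
    # outputs and the real identity; lower means less linkable.
    return math.exp(-(theta - identity) ** 2)

def grad(f, theta, eps=1e-5):
    # Central-difference gradient, to keep the sketch dependency-free.
    return (f(theta + eps) - f(theta - eps)) / (2 * eps)

def train_iddm(theta=0.0, privacy_weight=1.0, steps=200, lr=0.05):
    """Alternate short personalization updates with identity-decoupling
    updates. A two-stage ramp keeps the privacy term off early (so
    utility is established first), then phases it in (hypothetical
    schedule, not the paper's exact one)."""
    for step in range(steps):
        w = privacy_weight * min(1.0, step / (steps / 2))  # two-stage ramp
        # Personalization step: improve generation utility.
        theta -= lr * grad(personalization_loss, theta)
        # Identity-decoupling step: reduce identity linkability.
        theta -= lr * w * grad(identity_similarity, theta)
    return theta
```

Setting `privacy_weight=0.0` recovers plain personalization; larger values trade utility for lower identity similarity, mirroring the tunable privacy-utility control.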