FaceCrafter: Identity-Conditional Diffusion with Disentangled Control over Facial Pose, Expression, and Emotion

📅 2025-05-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of fine-grained disentanglement and independent control of non-identity attributes—namely pose, expression, and emotion—in identity-conditioned face generation. We propose a lightweight dual-control framework built upon diffusion models, wherein dedicated pose, expression, and emotion control modules are embedded into the cross-attention layers. To enforce explicit separation between identity representations and non-identity factors, we introduce a feature orthogonality constraint during training. Compared to existing approaches, our method achieves superior identity fidelity while significantly improving attribute control accuracy and generation diversity. Quantitative evaluation and user studies demonstrate consistent superiority across all key metrics: +12.3% in control accuracy, +9.7% in identity preservation, and higher perceptual naturalness.

Technology Category

Application Category

📝 Abstract
Human facial images encode a rich spectrum of information, encompassing both stable identity-related traits and mutable attributes such as pose, expression, and emotion. While recent advances in image generation have enabled high-quality identity-conditional face synthesis, precise control over non-identity attributes remains challenging, and disentangling identity from these mutable factors is particularly difficult. To address these limitations, we propose a novel identity-conditional diffusion model that introduces two lightweight control modules designed to independently manipulate facial pose, expression, and emotion without compromising identity preservation. These modules are embedded within the cross-attention layers of the base diffusion model, enabling precise attribute control with minimal parameter overhead. Furthermore, our tailored training strategy, which leverages cross-attention between the identity feature and each non-identity control feature, encourages identity features to remain orthogonal to control signals, enhancing controllability and diversity. Quantitative and qualitative evaluations, along with perceptual user studies, demonstrate that our method surpasses existing approaches in terms of control accuracy over pose, expression, and emotion, while also improving generative diversity under identity-only conditioning.
Problem

Research questions and friction points this paper is trying to address.

Precise control over facial pose, expression, and emotion in synthesis
Disentangling identity from mutable facial attributes effectively
Enhancing generative diversity while preserving identity in face generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight modules for independent facial attribute control
Cross-attention layers enhance identity and attribute separation
Tailored training strategy improves controllability and diversity
🔎 Similar Papers
No similar papers found.