🤖 AI Summary
Existing face attribute editing methods suffer from three key limitations: (1) difficulty in precisely controlling single attributes, (2) lack of fine-grained control over editing intensity, and (3) attribute entanglement that distorts non-target facial features. To address these issues, we propose a generative editing framework based on orthogonal latent space disentanglement. Our method introduces an orthogonalized latent-direction learning mechanism into learnable generative models, combined with an explicit disentanglement-aware loss and an attention-enhanced encoder-decoder architecture, to yield semantically clear and mutually orthogonal attribute representations. By performing latent factor decomposition and imposing orthogonality constraints during optimization, we significantly improve both the independence and the intensity controllability of attribute manipulation. Extensive experiments on CelebA-HQ and FFHQ demonstrate that our approach outperforms state-of-the-art methods in editing accuracy, attribute disentanglement, and image fidelity, validating its effectiveness and generalizability.
📝 Abstract
We propose an image-to-image translation framework for facial attribute editing with disentangled, interpretable latent directions. The facial attribute editing task faces two challenges: targeted editing with controllable strength, and disentangled attribute representations that preserve non-target attributes during edits. To this end, inspired by latent space factorization works on fixed pretrained GANs, we formulate attribute editing as latent space factorization and, for each attribute, learn a linear direction that is orthogonal to the others. We train these directions with orthogonality constraints and disentanglement losses. To project images into semantically organized latent spaces, we employ an encoder-decoder architecture with attention-based skip connections. We extensively compare against previous image translation algorithms and editing methods built on pretrained GANs. Our experiments show that our method significantly improves over the state of the art.
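The core mechanism, learning per-attribute linear directions under an orthogonality constraint and applying them with a controllable strength, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the names (`latent_dim`, `alpha`) and the exact Frobenius-norm form of the penalty are our own choices for exposition, not the paper's implementation.

```python
import numpy as np

# Hypothetical sketch: k learned attribute directions stacked as rows of D
# (shape k x latent_dim). Dimensions are illustrative, not from the paper.
rng = np.random.default_rng(0)
latent_dim, k = 512, 8
D = rng.standard_normal((k, latent_dim))

def orthogonality_loss(D):
    """Penalize non-orthogonality: ||D_n D_n^T - I||_F^2 on row-normalized D."""
    Dn = D / np.linalg.norm(D, axis=1, keepdims=True)  # unit-norm directions
    gram = Dn @ Dn.T                                   # pairwise cosine similarities
    return np.sum((gram - np.eye(len(D))) ** 2)

def edit(z, D, i, alpha):
    """Move a latent code z along attribute direction i with strength alpha."""
    d = D[i] / np.linalg.norm(D[i])
    return z + alpha * d

# Perfectly orthogonal directions drive the penalty to zero.
Q, _ = np.linalg.qr(rng.standard_normal((latent_dim, k)))  # orthonormal columns
assert orthogonality_loss(Q.T) < 1e-10
```

In training, a penalty of this form would be added to the task losses so the directions stay mutually orthogonal, while `alpha` gives the user continuous control over editing strength.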