🤖 AI Summary
Current instruction-driven face editing models struggle with fine-grained attribute control and identity preservation, especially when conditional landmarks deviate significantly from the source (such as under extreme expressions, large pose variations, or estimation errors), which often causes identity distortion. To address this, we propose LaTo, a diffusion-transformer-based framework for fine-grained face editing. Our method introduces a novel landmark tokenization mechanism coupled with location-mapping positional encoding to decouple geometric structure from appearance features; designs a robust, vision-language-model-guided landmark predictor to improve the reliability of conditioning; and constructs HFL-150K, to our knowledge the largest benchmark dedicated to fine-grained face editing. Extensive experiments demonstrate that our approach achieves state-of-the-art performance, improving identity preservation and semantic consistency by 7.8% and 4.6%, respectively, over prior methods.
📝 Abstract
Recent multimodal models for instruction-based face editing enable semantic manipulation but still struggle with precise attribute control and identity preservation. Structural facial representations such as landmarks are effective for intermediate supervision, yet most existing methods treat them as rigid geometric constraints, which can degrade identity when conditional landmarks deviate significantly from the source (e.g., large expression or pose changes, inaccurate landmark estimates). To address these limitations, we propose LaTo, a landmark-tokenized diffusion transformer for fine-grained, identity-preserving face editing. Our key innovations include: (1) a landmark tokenizer that directly quantizes raw landmark coordinates into discrete facial tokens, obviating the need for dense pixel-wise correspondence; (2) a location-mapping positional encoding that integrates facial and image tokens for unified processing, enabling flexible yet decoupled geometry-appearance interactions with high efficiency and strong identity preservation; and (3) a landmark predictor that leverages vision-language models to infer target landmarks from instructions and source images, whose structured chain-of-thought improves estimation accuracy and interactive control. To mitigate data scarcity, we curate HFL-150K, to our knowledge the largest benchmark for this task, containing over 150K real face pairs with fine-grained instructions. Extensive experiments show that LaTo outperforms state-of-the-art methods by 7.8% in identity preservation and 4.6% in semantic consistency. Code and dataset will be made publicly available upon acceptance.
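To make the landmark-tokenization idea concrete, here is a minimal sketch of quantizing raw (x, y) landmark coordinates into discrete tokens on a uniform spatial grid. This is an illustrative assumption, not the paper's actual tokenizer: the function name `tokenize_landmarks`, the 512-pixel crop size, and the 64-cell grid are all hypothetical, and the real method may use a learned codebook or a different quantization scheme.

```python
import numpy as np

def tokenize_landmarks(landmarks, image_size=512, grid_size=64):
    """Quantize (x, y) landmark coordinates into discrete grid-cell tokens.

    landmarks: (N, 2) float array of pixel coordinates.
    Returns an (N,) int array of token ids in [0, grid_size**2).
    """
    # Map each pixel coordinate to its grid-cell index, clamped to the grid.
    cells = np.clip(
        (np.asarray(landmarks) / image_size * grid_size).astype(int),
        0, grid_size - 1,
    )
    # Flatten the (row, col) cell index into a single token id.
    return cells[:, 1] * grid_size + cells[:, 0]

# Example: three landmarks on a hypothetical 512x512 face crop.
lm = np.array([[256.0, 256.0], [100.5, 340.2], [500.0, 10.0]])
tokens = tokenize_landmarks(lm)  # one discrete token per landmark
```

Because each landmark becomes a single discrete token tied to a spatial location, such tokens can be interleaved with image tokens in a transformer without requiring dense pixel-wise correspondence between source and target geometry.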