Optimizing ID Consistency in Multimodal Large Models: Facial Restoration via Alignment, Entanglement, and Disentanglement

📅 2026-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses facial identity (ID) distortion in portrait editing with multimodal large models, which the authors attribute to cross-source distribution bias and cross-source feature contamination. To mitigate this, they propose EditedID, a training-free, plug-and-play framework built on an integrated alignment–disentanglement–entanglement mechanism: it aligns cross-source latent representations, disentangles identity from non-identity features, and selectively entangles visual elements to preserve identity without additional training. The method combines an adaptive mixing strategy, a hybrid solver, and an attentional gating mechanism to exploit diffusion trajectories and sampler characteristics, and supports both single- and multi-person open-world scenarios. Experiments demonstrate that EditedID achieves state-of-the-art ID fidelity while remaining consistent with the edited content, establishing a new benchmark for realistic portrait editing.

📝 Abstract
Multimodal editing large models have demonstrated powerful editing capabilities across diverse tasks. However, a persistent and long-standing limitation is the decline in facial identity (ID) consistency during realistic portrait editing. Because the human eye is highly sensitive to facial features, such inconsistency significantly hinders the practical deployment of these models. Current facial ID preservation methods struggle to consistently restore both facial identity and edited-element IP due to Cross-source Distribution Bias and Cross-source Feature Contamination. To address these issues, we propose EditedID, an Alignment-Disentanglement-Entanglement framework for robust identity-specific facial restoration. By systematically analyzing diffusion trajectories, sampler behaviors, and attention properties, we introduce three key components: 1) an adaptive mixing strategy that aligns cross-source latent representations throughout the diffusion process; 2) a hybrid solver that disentangles source-specific identity attributes and details; 3) an attentional gating mechanism that selectively entangles visual elements. Extensive experiments show that EditedID achieves state-of-the-art performance in preserving both original facial ID and edited-element IP consistency. As a training-free, plug-and-play solution, it establishes a new benchmark for practical and reliable single- and multi-person facial identity restoration in open-world settings, paving the way for deploying multimodal editing large models in real-person editing scenarios. The code is available at https://github.com/NDYBSNDY/EditedID.
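The components described in the abstract can be loosely illustrated in code. The NumPy sketch below is a hypothetical toy version, not the authors' implementation: `adaptive_blend` shows one plausible form of a time-dependent latent mixing schedule (the decay shape and the `gamma` parameter are assumptions), and `gated_attention` shows a generic per-key gate applied inside softmax attention to suppress contaminating (non-ID) features (the log-gate formulation is likewise an assumption).

```python
import numpy as np

def adaptive_blend(z_src, z_edit, t, T, gamma=2.0):
    """Time-dependent blend of source and edited latents.

    alpha decays from 1 at t=0 (early steps favor the source
    identity latent) to 0 at t=T (late steps favor the edited
    content latent). The polynomial decay is an assumption.
    """
    alpha = (1.0 - t / T) ** gamma
    return alpha * z_src + (1.0 - alpha) * z_edit

def gated_attention(q, k, v, gate):
    """Softmax attention with a per-key gate in [0, 1].

    A gate near 0 effectively masks that key out, so its value
    vector cannot contaminate the output (a toy stand-in for an
    attentional gating mechanism).
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores = scores + np.log(gate + 1e-8)  # gate == 0 => key masked out
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

At `t = 0` the blend returns the source latent unchanged, and at `t = T` it returns the edited latent; a zero gate drives the corresponding key's attention weight to roughly 1e-8, so that key contributes essentially nothing to the output.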
Problem

Research questions and friction points this paper is trying to address.

ID consistency
facial restoration
multimodal editing
cross-source distribution bias
feature contamination
Innovation

Methods, ideas, or system contributions that make the work stand out.

ID consistency
alignment-disentanglement-entanglement
facial restoration
multimodal large models
training-free