🤖 AI Summary
This work addresses the challenge of simultaneously achieving high fidelity, fast inference, and identity consistency in diffusion-based face swapping. We propose DreamID, a novel framework designed to resolve this trade-off. Methodologically, DreamID introduces: (1) a Triplet ID Group—a first-of-its-kind explicit supervision mechanism that jointly optimizes identity similarity, attribute preservation, and image quality; (2) single-step diffusion training via coupling with SD Turbo, integrated with a multi-branch architecture (SwapNet/FaceNet/ID Adapter) and end-to-end pixel-level losses; and (3) fine-grained attribute customization (e.g., eyewear, facial shape) via lightweight fine-tuning. At 512×512 resolution, DreamID achieves inference in just 0.6 seconds. Quantitatively, it surpasses state-of-the-art methods across all key metrics—identity similarity, pose/expression preservation, and perceptual fidelity—while demonstrating strong robustness under challenging conditions including harsh lighting, large pose variations, and occlusions.
📝 Abstract
In this paper, we introduce DreamID, a diffusion-based face swapping model that achieves high levels of ID similarity, attribute preservation, image fidelity, and fast inference speed. Unlike the typical face swapping training process, which often relies on implicit supervision and struggles to achieve satisfactory results, DreamID establishes explicit supervision for face swapping by constructing Triplet ID Group data, significantly enhancing identity similarity and attribute preservation. The iterative nature of diffusion models poses challenges for utilizing efficient image-space loss functions, as performing time-consuming multi-step sampling to obtain the generated image during training is impractical. To address this issue, we leverage the accelerated diffusion model SD Turbo, reducing the inference steps to a single iteration and enabling efficient pixel-level end-to-end training with explicit Triplet ID Group supervision. Additionally, we propose an improved diffusion-based model architecture comprising SwapNet, FaceNet, and ID Adapter. This robust architecture fully unlocks the power of the Triplet ID Group explicit supervision. Finally, to further extend our method, we explicitly modify the Triplet ID Group data during training to fine-tune and preserve specific attributes, such as glasses and face shape. Extensive experiments demonstrate that DreamID outperforms state-of-the-art methods in terms of identity similarity, pose and expression preservation, and image fidelity. Overall, DreamID achieves high-quality face swapping results at 512×512 resolution in just 0.6 seconds and performs exceptionally well in challenging scenarios such as complex lighting, large angles, and occlusions.
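The single-step training described above means the generator's output can be supervised directly in pixel space against the pseudo ground truth from a Triplet ID Group, alongside an identity term. The following is a minimal illustrative sketch of such a combined objective; the function and weight names are our own assumptions for exposition, not the paper's actual implementation or loss weights.

```python
import torch
import torch.nn.functional as F

def triplet_id_group_loss(generated, pseudo_gt, id_source, id_generated,
                          w_pix=1.0, w_id=1.0):
    """Sketch of explicit Triplet ID Group supervision (names illustrative).

    generated:    single-step generator output, shape (B, 3, H, W)
    pseudo_gt:    pseudo ground-truth swapped image from the triplet
    id_source:    face-recognition embedding of the source identity
    id_generated: face-recognition embedding of the generated face
    """
    # Explicit pixel-level supervision: because SD Turbo produces the image
    # in one step, we can compare it directly to the pseudo ground truth.
    pix_loss = F.l1_loss(generated, pseudo_gt)
    # Identity term: cosine distance between source and generated embeddings.
    id_loss = 1.0 - F.cosine_similarity(id_source, id_generated, dim=-1).mean()
    return w_pix * pix_loss + w_id * id_loss

# Dummy tensors standing in for a 512x512 RGB output and 512-d ID embeddings.
gen = torch.rand(1, 3, 512, 512)
gt = torch.rand(1, 3, 512, 512)
e_src = torch.randn(1, 512)
e_gen = torch.randn(1, 512)
loss = triplet_id_group_loss(gen, gt, e_src, e_gen)
```

In a real multi-step diffusion setup this direct pixel comparison would require back-propagating through the entire sampling chain, which is what motivates the single-step coupling with SD Turbo.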