DreamID: High-Fidelity and Fast diffusion-based Face Swapping via Triplet ID Group Learning

📅 2025-04-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of simultaneously achieving high fidelity, fast inference, and identity consistency in diffusion-based face swapping. We propose DreamID, a novel framework designed to resolve this trade-off. Methodologically, DreamID introduces: (1) a Triplet ID Group—a first-of-its-kind explicit supervision mechanism that jointly optimizes identity similarity, attribute preservation, and image quality; (2) single-step diffusion training via coupling with SD Turbo, integrated with a multi-branch architecture (SwapNet/FaceNet/ID Adapter) and end-to-end pixel-level losses; and (3) fine-grained attribute customization (e.g., eyewear, facial shape) via lightweight fine-tuning. At 512×512 resolution, DreamID achieves inference in just 0.6 seconds. Quantitatively, it surpasses state-of-the-art methods across all key metrics—identity similarity, pose/expression preservation, and perceptual fidelity—while demonstrating strong robustness under challenging conditions including harsh lighting, large pose variations, and occlusions.
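The Triplet ID Group idea above can be illustrated with a minimal sketch. This is a hypothetical, simplified stand-in (flat pixel lists instead of images, an injected `embed` function instead of a face-recognition network): each training sample is a triplet of source image, target image, and a pre-computed pseudo ground truth, so pixel-level and ID losses can be applied directly to the one-step output.

```python
from dataclasses import dataclass

@dataclass
class TripletIDGroup:
    source: list      # source face image (stand-in: flat pixel list)
    target: list      # target image whose attributes must be preserved
    pseudo_gt: list   # pre-computed swap result used as explicit supervision

def l1_loss(pred, gt):
    # pixel-level reconstruction loss against the triplet's pseudo ground truth
    return sum(abs(p - g) for p, g in zip(pred, gt)) / len(gt)

def id_similarity(a, b):
    # cosine similarity between (stand-in) identity embeddings
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def triplet_supervision_loss(pred, triplet, embed, w_pix=1.0, w_id=1.0):
    # explicit supervision: match the pseudo GT in pixel space while
    # pulling the predicted identity toward the source identity
    pix = l1_loss(pred, triplet.pseudo_gt)
    ident = 1.0 - id_similarity(embed(pred), embed(triplet.source))
    return w_pix * pix + w_id * ident
```

The point of the explicit triplet is that the loss is computed against a concrete target image rather than only implicit reconstruction signals.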

📝 Abstract
In this paper, we introduce DreamID, a diffusion-based face swapping model that achieves high levels of ID similarity, attribute preservation, image fidelity, and fast inference speed. Unlike the typical face swapping training process, which often relies on implicit supervision and struggles to achieve satisfactory results, DreamID establishes explicit supervision for face swapping by constructing Triplet ID Group data, significantly enhancing identity similarity and attribute preservation. The iterative nature of diffusion models poses challenges for utilizing efficient image-space loss functions, as performing time-consuming multi-step sampling to obtain the generated image during training is impractical. To address this issue, we leverage the accelerated diffusion model SD Turbo, reducing the inference steps to a single iteration and enabling efficient pixel-level end-to-end training with explicit Triplet ID Group supervision. Additionally, we propose an improved diffusion-based model architecture comprising SwapNet, FaceNet, and ID Adapter. This robust architecture fully unlocks the power of the Triplet ID Group explicit supervision. Finally, to further extend our method, we explicitly modify the Triplet ID Group data during training to fine-tune and preserve specific attributes, such as glasses and face shape. Extensive experiments demonstrate that DreamID outperforms state-of-the-art methods in terms of identity similarity, pose and expression preservation, and image fidelity. Overall, DreamID achieves high-quality face swapping results at 512×512 resolution in just 0.6 seconds and performs exceptionally well in challenging scenarios such as complex lighting, large angles, and occlusions.
Problem

Research questions and friction points this paper is trying to address.

Achieving high-fidelity face swapping with fast inference speed
Enhancing identity similarity via Triplet ID Group explicit supervision
Preserving specific attributes such as glasses and face shape
Innovation

Methods, ideas, or system contributions that make the work stand out.

Triplet ID Group explicit supervision
SD Turbo for single-step inference
SwapNet, FaceNet, ID Adapter architecture
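The single-step point above is the key enabler for end-to-end pixel losses: with a one-step model, the generated image is available after a single forward pass, so an image-space loss can be backpropagated directly. A toy contrast (hypothetical function names, `model` standing in for the denoiser):

```python
def multi_step_sample(model, x, steps=50):
    # standard iterative denoising: 'steps' forward passes are needed
    # before any image-space loss could even be computed -- impractical
    # to run inside every training iteration
    for t in reversed(range(steps)):
        x = model(x, t)
    return x

def single_step_sample(model, x):
    # SD-Turbo-style one-step generation: the output image exists after
    # a single call, so pixel-level supervision applies directly
    return model(x, 0)
```

The same one-call property is what makes the 0.6 s inference at 512×512 plausible: generation cost is a single network evaluation rather than tens of denoising steps.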
Fulong Ye
ByteDance
Vision-Language Pretraining, Generative Models, Diffusion Models
Miao Hua
ByteDance Inc.
Computer Vision
Pengze Zhang
Intelligent Creation Team, ByteDance
Xinghui Li
Intelligent Creation Team, ByteDance
Qichao Sun
Intelligent Creation Team, ByteDance
Songtao Zhao
Intelligent Creation Team, ByteDance
Qian He
ByteDance
Xinglong Wu
ByteDance Algorithm Engineer
Artificial Intelligence