🤖 AI Summary
This work addresses key limitations of existing lip-sync methods—namely, reliance on GANs or diffusion models, explicit masking requirements, and high inference latency—by proposing a mask-free, purely reconstruction-based real-time approach. The method operates in two stages: (1) a single-step latent-space reconstruction conditioned on an identity reference, a target frame, and a lip-pose vector; and (2) a flow-matching audio-to-pose Transformer for fine-grained lip-motion prediction. Key innovations include self-supervised pseudo-ground-truth generation and mask distillation, enabling end-to-end lip localization and editing with disentangled modeling of identity and pose. On a single GPU, the framework achieves >100 FPS inference while matching the visual quality of state-of-the-art large models, with significantly improved stability and computational efficiency.
📝 Abstract
We present FlashLips, a two-stage, mask-free lip-sync system that decouples lip control from rendering and runs in real time at over 100 FPS on a single GPU, while matching the visual quality of larger state-of-the-art models. Stage 1 is a compact, one-step latent-space editor that reconstructs an image from a reference identity, a masked target frame, and a low-dimensional lip-pose vector, trained purely with reconstruction losses: no GANs or diffusion. To remove explicit masks at inference, we use self-supervision: we generate mouth-altered variants of the target image that serve as pseudo ground truth for fine-tuning, teaching the network to localize edits to the lips while preserving the rest of the frame. Stage 2 is an audio-to-pose transformer trained with a flow-matching objective to predict lip-pose vectors from speech. Together, the two stages form a simple, stable pipeline that combines deterministic reconstruction with robust audio control, delivering high perceptual quality at faster-than-real-time speed.
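To make the flow-matching objective of Stage 2 concrete, here is a minimal sketch of how such a training loss and sampler are typically set up. This is not the paper's implementation: `toy_transformer`, the pose dimension (8), the audio-feature dimension (16), and the straight-line interpolation path are all illustrative assumptions standing in for the actual audio-to-pose transformer.

```python
import numpy as np

rng = np.random.default_rng(0)
POSE_DIM, AUDIO_DIM = 8, 16  # hypothetical dimensions, not from the paper

def flow_matching_loss(v_pred_fn, pose_gt, audio_feat):
    """Conditional flow-matching loss for audio-to-pose training.

    pose_gt:    (B, POSE_DIM) ground-truth lip-pose vectors.
    audio_feat: (B, AUDIO_DIM) speech conditioning features.
    Draws noise, interpolates noise -> pose along a straight path at a
    random time t, and regresses the path's constant velocity.
    """
    x0 = rng.standard_normal(pose_gt.shape)   # noise sample
    t = rng.uniform(size=(pose_gt.shape[0], 1))
    x_t = (1.0 - t) * x0 + t * pose_gt        # point on the linear path
    v_target = pose_gt - x0                   # velocity of that path
    v_pred = v_pred_fn(x_t, t, audio_feat)
    return float(np.mean((v_pred - v_target) ** 2))

# Stand-in for the audio-to-pose transformer: a single linear map over
# the concatenated (state, time, audio) input.
W = rng.standard_normal((POSE_DIM + 1 + AUDIO_DIM, POSE_DIM)) * 0.01
def toy_transformer(x_t, t, audio_feat):
    return np.concatenate([x_t, t, audio_feat], axis=1) @ W

def sample_pose(audio_feat, steps=8):
    """Euler-integrate the learned velocity field from noise to a pose."""
    x = rng.standard_normal((audio_feat.shape[0], POSE_DIM))
    dt = 1.0 / steps
    for i in range(steps):
        t = np.full((audio_feat.shape[0], 1), i * dt)
        x = x + dt * toy_transformer(x, t, audio_feat)
    return x

pose = rng.standard_normal((4, POSE_DIM))
audio = rng.standard_normal((4, AUDIO_DIM))
loss = flow_matching_loss(toy_transformer, pose, audio)
sampled = sample_pose(audio)
```

In a full system, each sampled pose vector would then condition the Stage 1 one-step editor to render the lip region, which is what keeps inference fast: the renderer is deterministic and only the low-dimensional pose is generated iteratively.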