🤖 AI Summary
Video-based hair transfer faces challenges including temporal inconsistency, low spatial fidelity, and poor dynamic adaptability. This paper proposes a two-stage “anchor-frame-guided + animation-generation” framework: first, an Image Hair Transfer module achieves high-fidelity single-frame hairstyle transfer; second, a multi-scale gated SPADE decoder, augmented with semantic-aware modulation, explicitly models spatiotemporal coherence and geometric alignment within hair regions. To our knowledge, this is the first work to introduce anchor-frame guidance into video-level hair transfer—ensuring all non-hair regions remain strictly unchanged while significantly improving inter-frame coherence and fine-grained detail reconstruction. Our method achieves state-of-the-art performance across multiple benchmarks, supports diverse hairstyle transfers, and delivers superior visual quality and temporal stability. Code will be publicly released.
📝 Abstract
Hair transfer is increasingly valuable across domains such as social media, gaming, advertising, and entertainment. While significant progress has been made in single-image hair transfer, video-based hair transfer remains challenging due to the need for temporal consistency, spatial fidelity, and dynamic adaptability. In this work, we propose HairShifter, a novel "Anchor Frame + Animation" framework that unifies high-quality image hair transfer with smooth and coherent video animation. At its core, HairShifter integrates an Image Hair Transfer (IHT) module for precise per-frame transformation and a Multi-Scale Gated SPADE Decoder to ensure seamless spatial blending and temporal coherence. Our method maintains hairstyle fidelity across frames while preserving non-hair regions. Extensive experiments demonstrate that HairShifter achieves state-of-the-art performance in video hairstyle transfer, combining superior visual quality, temporal consistency, and scalability. The code will be publicly available. We believe this work will open new avenues for video-based hairstyle transfer and establish a robust baseline in this field.
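The gated SPADE decoder described above can be illustrated with a minimal, self-contained sketch. This is not the paper's actual layer: the function name, the use of per-channel weights in place of learned convolutions, and the choice of gating directly by the hair mask are all illustrative assumptions. The sketch shows the core SPADE idea (parameter-free normalization followed by spatially varying scale/shift predicted from a semantic map) plus a gate that leaves non-hair pixels strictly unchanged, as the abstract requires.

```python
import numpy as np

def gated_spade_modulation(feat, mask, w_gamma, w_beta, eps=1e-5):
    """Simplified gated SPADE-style modulation (illustrative, not the paper's exact layer).

    feat:    (C, H, W) decoder feature map.
    mask:    (1, H, W) hair-region map in [0, 1], acting as the semantic input.
    w_gamma: (C, 1, 1) per-channel scale weights (stand-in for a learned conv).
    w_beta:  (C, 1, 1) per-channel shift weights (stand-in for a learned conv).
    """
    # Parameter-free normalization over spatial dimensions, as in SPADE.
    mu = feat.mean(axis=(1, 2), keepdims=True)
    var = feat.var(axis=(1, 2), keepdims=True)
    normed = (feat - mu) / np.sqrt(var + eps)

    # Spatially varying scale and shift predicted from the semantic mask.
    gamma = w_gamma * mask          # broadcasts to (C, H, W)
    beta = w_beta * mask
    modulated = normed * (1.0 + gamma) + beta

    # Gate by the hair mask: inside the hair region the modulated features
    # pass through; outside it the original features are kept exactly,
    # so non-hair regions remain untouched.
    return mask * modulated + (1.0 - mask) * feat
```

In the full model, `w_gamma` and `w_beta` would be replaced by learned convolutions over the semantic map at multiple scales, and the gate itself would be predicted rather than taken directly from the mask; the sketch only conveys the modulation-plus-gating structure.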