🤖 AI Summary
This work addresses the challenge of maintaining lip-sync fidelity during facial expression manipulation by proposing the Talking Head Facial Expression Manipulation (THFEM) framework. THFEM is the first to adapt audio-driven talking head generation (AD-THG) models to the Speech-Preserving Facial Expression Manipulation (SPFEM) task. By combining AD-THG-based frame synthesis with an adjacent frame learning strategy that leverages information from neighboring frames, the method enables precise control over facial expressions while preserving the original lip movements dictated by speech. Experimental results demonstrate that THFEM improves lip-sync accuracy, temporal consistency, and visual realism, achieving high-fidelity facial expression manipulation with coherent mouth movements.
📝 Abstract
Speech-Preserving Facial Expression Manipulation (SPFEM) aims to alter facial expressions in images and videos while retaining the original mouth movements. Despite recent advances, SPFEM still struggles with accurate lip synchronization due to the complex interplay between facial expressions and mouth shapes. Capitalizing on the ability of audio-driven talking head generation (AD-THG) models to synthesize precise lip movements, our research introduces a novel integration of these models with SPFEM. We present a new framework, Talking Head Facial Expression Manipulation (THFEM), which uses AD-THG models to generate frames with accurately synchronized lip movements from audio inputs and SPFEM-altered images. However, increasing the number of frames generated by AD-THG models tends to compromise the realism and expression fidelity of the images. To counter this, we develop an adjacent frame learning strategy that fine-tunes AD-THG models to predict sequences of consecutive frames. This strategy enables the models to incorporate information from neighboring frames, significantly improving image quality at test time. Extensive experimental evaluations demonstrate that this framework effectively preserves mouth shapes during expression manipulation, highlighting the substantial benefit of integrating AD-THG with SPFEM.
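The core of the adjacent frame learning strategy is supervising the generator on short windows of consecutive frames rather than single frames, so each prediction can draw on its temporal neighbors. A minimal sketch of the windowing step, with purely illustrative names (this is not the paper's actual code or API):

```python
# Hypothetical sketch: group a frame sequence into overlapping windows of
# adjacent frames, the unit on which an AD-THG model could be fine-tuned
# to predict consecutive frames jointly. Names are assumptions.

def consecutive_windows(frames, window_size=3):
    """Return all overlapping windows of `window_size` adjacent frames."""
    if window_size < 1 or window_size > len(frames):
        raise ValueError("window_size must be in [1, len(frames)]")
    return [frames[i:i + window_size]
            for i in range(len(frames) - window_size + 1)]

frames = ["f0", "f1", "f2", "f3", "f4"]
windows = consecutive_windows(frames, window_size=3)
# windows == [['f0', 'f1', 'f2'], ['f1', 'f2', 'f3'], ['f2', 'f3', 'f4']]
```

Each frame then appears in several overlapping windows; at test time, the overlapping predictions of the same frame could be fused (e.g. averaged) to improve temporal consistency, under the assumptions stated above.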