Exploring Talking Head Models With Adjacent Frame Prior for Speech-Preserving Facial Expression Manipulation

📅 2026-01-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of maintaining lip-sync fidelity during facial expression manipulation by proposing the Talking Head Facial Expression Manipulation (THFEM) framework, the first to adapt audio-driven talking head generation (AD-THG) models to the Speech-Preserving Facial Expression Manipulation (SPFEM) task. By combining AD-THG lip synthesis with an adjacent frame learning strategy that leverages temporal information from neighboring frames, the method enables precise control over facial expressions while preserving the original lip movements dictated by speech. Experimental results demonstrate that THFEM improves lip-sync accuracy, temporal consistency, and visual realism in generated videos, achieving high-fidelity facial expression manipulation with coherent mouth movements.

📝 Abstract
Speech-Preserving Facial Expression Manipulation (SPFEM) is an innovative technique aimed at altering facial expressions in images and videos while retaining the original mouth movements. Despite advancements, SPFEM still struggles with accurate lip synchronization due to the complex interplay between facial expressions and mouth shapes. Capitalizing on the advanced capabilities of audio-driven talking head generation (AD-THG) models in synthesizing precise lip movements, our research introduces a novel integration of these models with SPFEM. We present a new framework, Talking Head Facial Expression Manipulation (THFEM), which utilizes AD-THG models to generate frames with accurately synchronized lip movements from audio inputs and SPFEM-altered images. However, increasing the number of frames generated by AD-THG models tends to compromise the realism and expression fidelity of the images. To counter this, we develop an adjacent frame learning strategy that finetunes AD-THG models to predict sequences of consecutive frames. This strategy enables the models to incorporate information from neighboring frames, significantly improving image quality during testing. Our extensive experimental evaluations demonstrate that this framework effectively preserves mouth shapes during expression manipulations, highlighting the substantial benefits of integrating AD-THG with SPFEM.
Problem

Research questions and friction points this paper is trying to address.

Speech-Preserving Facial Expression Manipulation
lip synchronization
talking head
facial expression manipulation
mouth movements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Talking Head Generation
Facial Expression Manipulation
Lip Synchronization
Adjacent Frame Learning
Speech-Preserving
Zhenxuan Lu
Guangdong University of Technology, China
Zhihua Xu
Guangdong University of Technology
CV · AIGC · MLLM
Zhijing Yang
Guangdong University of Technology, China
Feng Gao
Ocean University of China
Hyperspectral image processing · Artificial Intelligence Oceanography
Yongyi Lu
Guangdong University of Technology, China
Keze Wang
Sun Yat-sen University, China
Tianshui Chen
Guangdong University of Technology, China