Exploiting Temporal Audio-Visual Correlation Embedding for Audio-Driven One-Shot Talking Head Animation

📅 2025-04-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
The core challenge of Audio-Driven One-Shot Talking Head Animation (ADOS-THA) lies in modeling the subtle, easily overlooked motion variations between adjacent frames. To address this, the paper proposes the Temporal Audio-Visual Correlation Embedding (TAVCE) framework, which introduces a temporal audio-visual correlation metric and alignment mechanism that leverages the temporal relationships among adjacent audio clips as implicit supervision for visual generation. The framework further incorporates channel-attention-guided feature enhancement and uses the alignment correlations as an additional training objective. Evaluated on the HDTF, LRW, VoxCeleb1, and VoxCeleb2 benchmarks, the method achieves significant improvements over state-of-the-art approaches, particularly in lip-sync accuracy and the naturalness of facial micro-expressions. These results empirically validate the effectiveness of exploiting intrinsic temporal cross-modal correlations to model inter-frame dynamics.
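The summary does not give the exact form of the correlation metric, but the core idea, matching the relationship between adjacent audio clips to the relationship between the corresponding adjacent frames, can be illustrated concretely. Below is a minimal PyTorch sketch, assuming cosine similarity between adjacent embeddings as the temporal relationship and an MSE penalty as the alignment term; the paper's actual metric and loss may differ.

```python
import torch
import torch.nn.functional as F

def temporal_correlation_alignment_loss(audio_feats: torch.Tensor,
                                        visual_feats: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of the temporal audio-visual alignment objective.

    audio_feats:  (B, T, D) embeddings of T consecutive audio clips.
    visual_feats: (B, T, D) embeddings of the T corresponding video frames.

    Per the summary, the relationship between adjacent audio clips should
    match the relationship between adjacent video frames. Here that
    relationship is modeled as the cosine similarity of adjacent embeddings.
    """
    # Normalize so the adjacent-step dot product is a cosine score.
    a = F.normalize(audio_feats, dim=-1)
    v = F.normalize(visual_feats, dim=-1)

    # Temporal relationship profiles of shape (B, T-1).
    audio_rel = (a[:, 1:] * a[:, :-1]).sum(dim=-1)
    visual_rel = (v[:, 1:] * v[:, :-1]).sum(dim=-1)

    # Penalize mismatch between the audio and visual temporal profiles.
    return F.mse_loss(visual_rel, audio_rel)
```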

📝 Abstract
The paramount challenge in Audio-Driven One-Shot Talking Head Animation (ADOS-THA) lies in capturing the subtle, nearly imperceptible changes between adjacent video frames. Inherently, the temporal relationship between adjacent audio clips is highly correlated with that of the corresponding adjacent video frames, offering supplementary information that can be pivotal for guiding and supervising talking head animation. In this work, we propose to learn audio-visual correlations and integrate them to enhance feature representation and regularize the final generation via a novel Temporal Audio-Visual Correlation Embedding (TAVCE) framework. Specifically, the framework first learns an audio-visual temporal correlation metric, ensuring that the temporal relationships of adjacent audio clips are aligned with those of the corresponding adjacent video frames. Since the temporal audio relationship carries information aligned with the visual frames, we integrate it to guide the learning of more representative features via a simple yet effective channel attention mechanism. During training, we also use the alignment correlations as an additional objective to supervise the generation of visual frames. We conduct extensive experiments on several publicly available benchmarks (i.e., HDTF, LRW, VoxCeleb1, and VoxCeleb2) to demonstrate its superiority over existing leading algorithms.
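The abstract describes the channel attention only as "simple yet effective"; one plausible reading is a squeeze-and-excitation style gate in which the audio temporal-relation feature produces per-channel weights for the visual feature map. The module below is a hypothetical sketch under that assumption; the class name, layer sizes, and reduction ratio are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class AudioGuidedChannelAttention(nn.Module):
    """Hypothetical audio-guided channel attention (SE-style gating)."""

    def __init__(self, audio_dim: int, visual_channels: int, reduction: int = 8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(audio_dim, visual_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(visual_channels // reduction, visual_channels),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, visual_feat: torch.Tensor, audio_rel: torch.Tensor) -> torch.Tensor:
        # visual_feat: (B, C, H, W) frame features; audio_rel: (B, audio_dim).
        weights = self.gate(audio_rel).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        return visual_feat * weights  # reweight visual channels by audio guidance
```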
Problem

Research questions and friction points this paper is trying to address.

Capturing subtle changes between adjacent video frames
Aligning temporal audio-visual correlations for animation
Enhancing feature representation with audio-visual guidance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learn audio-visual temporal correlation metric
Integrate correlation via channel attention mechanism
Use alignment correlations as training objective (see the combined-loss sketch after this list)
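Taken together, the alignment correlation acts as an auxiliary term alongside the main generation objective. A minimal sketch, assuming a simple weighted sum and reusing the hypothetical `temporal_correlation_alignment_loss` from the earlier sketch; `lambda_align` is an illustrative hyperparameter, not a value reported by the paper.

```python
def total_loss(gen_loss: torch.Tensor,
               audio_feats: torch.Tensor,
               visual_feats: torch.Tensor,
               lambda_align: float = 0.1) -> torch.Tensor:
    # gen_loss: the primary generation/reconstruction objective.
    # The alignment term supervises generated frames so their temporal
    # relationships respect those of the driving audio, per the abstract.
    align = temporal_correlation_alignment_loss(audio_feats, visual_feats)
    return gen_loss + lambda_align * align
```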
Zhihua Xu
Guangdong University of Technology
CV · AIGC · MLLM

Tianshui Chen
School of Information Engineering, Guangdong University of Technology, Guangzhou 510006, China

Zhijing Yang
School of Information Engineering, Guangdong University of Technology, Guangzhou 510006, China

Siyuan Peng
School of Information Engineering, Guangdong University of Technology, Guangzhou 510006, China

Keze Wang
School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China

Liang Lin
Fellow of IEEE/IAPR, Professor of Computer Science, Sun Yat-sen University
Embodied AI · Causal Inference and Learning · Multimodal Data Analysis