LipGen: Viseme-Guided Lip Video Generation for Enhancing Visual Speech Recognition

📅 2025-01-08
🤖 AI Summary
To address the poor generalization and robustness of visual speech recognition (VSR) models in real-world scenarios—caused by insufficient training data diversity—this paper proposes LipGen, a speech-driven lip video synthesis framework. Methodologically, it introduces the first viseme-guided lip video generation paradigm, leveraging diffusion models and GANs for high-fidelity speech-to-lip video synthesis; designs a viseme-aware attention mechanism to achieve fine-grained, phoneme-unit-level temporal alignment; and incorporates joint auxiliary tasks of viseme classification and attention prediction to strengthen temporal modeling. Evaluated on the LRW dataset, LipGen surpasses state-of-the-art methods and demonstrates significant performance gains under challenging conditions—including occlusion, low resolution, and large head poses—validating that viseme-aware synthetic data substantially enhances robust lip reading.
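
The auxiliary-task idea summarized above (viseme classification combined with attention over frame features) can be sketched roughly as follows. This is a minimal PyTorch illustration under stated assumptions, not the authors' implementation: the module name `VisemeAuxHead`, the feature size `feat_dim`, and the viseme inventory size `n_visemes` are all hypothetical.

```python
import torch
import torch.nn as nn

class VisemeAuxHead(nn.Module):
    """Hypothetical auxiliary head: attention-pools per-frame lip features,
    then classifies the pooled representation into viseme categories.

    feat_dim and n_visemes are illustrative assumptions, not values from the paper.
    """
    def __init__(self, feat_dim: int = 256, n_visemes: int = 14):
        super().__init__()
        self.attn_score = nn.Linear(feat_dim, 1)        # per-frame attention logit
        self.classifier = nn.Linear(feat_dim, n_visemes)

    def forward(self, frame_feats: torch.Tensor):
        # frame_feats: (batch, time, feat_dim) from a lip-reading backbone
        weights = torch.softmax(self.attn_score(frame_feats), dim=1)  # (B, T, 1)
        pooled = (weights * frame_feats).sum(dim=1)                   # (B, feat_dim)
        logits = self.classifier(pooled)                              # (B, n_visemes)
        return logits, weights.squeeze(-1)

# Joint training would then combine the main word-recognition loss with this
# auxiliary viseme loss, e.g. total = word_loss + lam * viseme_loss.
```

The attention weights give the model an explicit signal about which temporal segments carry viseme-discriminative information, which is the "directing the model's focus toward the relevant segments" behavior described in the abstract.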

📝 Abstract
Visual speech recognition (VSR), commonly known as lip reading, has garnered significant attention due to its wide-ranging practical applications. The advent of deep learning techniques and advancements in hardware capabilities have significantly enhanced the performance of lip reading models. Despite these advancements, existing datasets predominantly feature stable video recordings with limited variability in lip movements. This limitation results in models that are highly sensitive to variations encountered in real-world scenarios. To address this issue, we propose a novel framework, LipGen, which aims to improve model robustness by leveraging speech-driven synthetic visual data, thereby mitigating the constraints of current datasets. Additionally, we introduce an auxiliary task that incorporates viseme classification alongside attention mechanisms. This approach facilitates the efficient integration of temporal information, directing the model's focus toward the relevant segments of speech, thereby enhancing discriminative capabilities. Our method demonstrates superior performance compared to the current state-of-the-art on the Lip Reading in the Wild (LRW) dataset and exhibits even more pronounced advantages under challenging conditions.
Problem

Research questions and friction points this paper is trying to address.

Visual Speech Recognition
Model Generalization
Diverse Training Data
Innovation

Methods, ideas, or system contributions that make the work stand out.

LipGen
Robustness Enhancement
Auxiliary Task Learning
Bowen Hao
Renmin University of China
data mining, recommender systems, natural language processing
Dongliang Zhou
Harbin Institute of Technology, Shenzhen, China
Xiaojie Li
Harbin Institute of Technology, Shenzhen, China
Xingyu Zhang
Horizon Robotics Inc.
NLP, VLM, AD
Liang Xie
Wuhan University of Technology
Time Series Forecasting, Cross-modal Learning
Jianlong Wu
Professor, Harbin Institute of Technology (Shenzhen)
Computer Vision, Multimodal Learning
Erwei Yin
National Institute of Defense Technology Innovation, Academy of Military Sciences, Beijing, China; Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin, China