🤖 AI Summary
To address the high computational cost of visual speech recognition (VSR) models and their difficulty of deployment on resource-constrained devices, this paper proposes a lightweight end-to-end isolated-word lip-reading framework. Methodologically, we design a unified, efficient backbone that integrates a lightweight image encoder with a streamlined temporal convolutional network (TCN), adopting a two-stage feature extraction–classification architecture optimized via end-to-end training. Evaluated on the largest publicly available English word-level dataset (LRS3-Words), our model achieves a word-level accuracy of 92.3% while reducing the parameter count to under 1.5M and FLOPs by 62% relative to standard baselines. This represents a 2.1–4.7 percentage-point improvement over existing lightweight VSR models. To foster reproducibility and further research, we will release both the source code and pre-trained models.
📝 Abstract
Visual speech recognition (VSR) systems decode spoken words from an input sequence using only video data. Practical applications of such systems include medical assistance as well as human–machine interaction. A VSR system is typically employed in a complementary role in cases where the audio is corrupted or unavailable. To accurately predict the spoken words, these architectures often rely on deep neural networks to extract meaningful representations from the input sequence. While deep architectures achieve impressive recognition performance, relying on such models incurs significant computational costs, which translate into increased hardware requirements and limit applicability in real-world scenarios where resources are constrained. This prevents wider adoption and deployment of speech recognition systems in practical applications. In this work, we aim to alleviate this issue by developing VSR architectures with low hardware costs. Following the standard two-network design paradigm, where one network handles visual feature extraction and another classifies the entire sequence from the extracted features, we develop lightweight end-to-end architectures by first benchmarking efficient models from the image classification literature, and then adopting lightweight block designs in a temporal convolutional network backbone. We create several unified models with low resource requirements but strong recognition performance. Experiments on the largest public database of English words demonstrate the effectiveness and practicality of our models. Code and trained models will be made publicly available.
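The two-network paradigm above can be sketched as a minimal NumPy pipeline: a per-frame feature encoder followed by a depthwise temporal convolution (the kind of lightweight block a streamlined TCN uses) and a pooled classifier. All layer sizes, the single-layer encoder, the depthwise kernels, and the mean-pooling classification head here are illustrative assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

def frame_encoder(frames, W):
    """Per-frame visual feature extraction (stand-in for a lightweight
    image encoder): a linear projection with a ReLU nonlinearity.
    frames: (T, D_in) flattened frame features; W: (D_in, D_feat)."""
    return np.maximum(frames @ W, 0.0)

def depthwise_temporal_conv(x, kernels):
    """Depthwise 1-D convolution along time, one kernel per channel --
    the cheap building block used in lightweight TCN designs.
    x: (T, C); kernels: (K, C). Zero-padded so the output is also (T, C)."""
    T, C = x.shape
    K = kernels.shape[0]
    pad = K // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros_like(x)
    for t in range(T):
        # Weighted sum over a K-frame temporal window, per channel.
        out[t] = np.sum(xp[t:t + K] * kernels, axis=0)
    return out

def classify(x, W_cls):
    """Sequence-level word prediction: mean-pool over time, then a
    linear classifier over the word vocabulary."""
    pooled = x.mean(axis=0)        # (C,)
    logits = pooled @ W_cls        # (num_words,)
    return int(np.argmax(logits))

# Tiny end-to-end demo with random weights: 10 frames, 64-dim inputs,
# 32 feature channels, a 3-tap temporal kernel, 5-word vocabulary.
rng = np.random.default_rng(0)
frames = rng.standard_normal((10, 64))
W = rng.standard_normal((64, 32))
kernels = rng.standard_normal((3, 32))
W_cls = rng.standard_normal((32, 5))

features = frame_encoder(frames, W)                  # (10, 32)
temporal = depthwise_temporal_conv(features, kernels)  # (10, 32)
predicted_word = classify(temporal, W_cls)           # index into vocabulary
```

Because the temporal stage is depthwise, its cost grows with K·C rather than K·C², which is the kind of FLOP reduction lightweight TCN blocks trade on; in practice both stages would be trained jointly end-to-end as the abstract describes.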