DynamicLip: Shape-Independent Continuous Authentication via Lip Articulator Dynamics

📅 2025-01-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing lip-based biometric methods overly rely on static lip shape, rendering them vulnerable to dynamic deformations during speech and necessitating full-face image acquisition—posing significant privacy risks. This paper addresses high-privacy-security scenarios by proposing a continuous authentication method that eschews static lip geometry entirely, instead leveraging only the dynamic articulatory motion patterns of the lips. We introduce, for the first time, a shape-invariant lip dynamics representation, integrating spatio-temporal keypoint modeling, articulator-specific motion analysis, a lightweight temporal neural network, and adversarial robust training. Evaluated on a 50-subject dataset, our method achieves 99.06% authentication accuracy. It demonstrates strong robustness against both AI-generated deepfakes and expert-level lip-reading impersonation attacks. The approach enables truly continuous, spoof-resistant authentication with minimal privacy overhead—requiring only localized lip-region video, not full-face imagery.

📝 Abstract
Biometric authentication has become increasingly popular due to its security and convenience; however, traditional biometrics are becoming less desirable in scenarios such as new mobile devices, Virtual Reality, and Smart Vehicles. For example, while face authentication is widely used, it raises significant privacy concerns: the collection of complete facial data makes it unsuitable for privacy-sensitive applications. Lip authentication has emerged as a promising alternative, but existing lip-based methods depend heavily on static lip shape captured while the mouth is closed, which makes them fragile under dynamic lip deformation and barely workable while the user is speaking. In this paper, we revisit the nature of lip biometrics and extract shape-independent features from the lips, studying the dynamic characteristics of lip biometrics based on articulator motion. Building on this analysis, we propose a system for shape-independent continuous authentication via lip articulator dynamics. The system enables robust, shape-independent, and continuous authentication, making it particularly suitable for scenarios with high security and privacy requirements. We conducted comprehensive experiments across different environments and attack scenarios on a collected dataset of 50 subjects. The results show that our system achieves an overall accuracy of 99.06% and remains robust under advanced mimic attacks and AI deepfake attacks, making it a viable solution for continuous biometric authentication in various applications.
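The core idea in the abstract, discarding static lip shape and keeping only articulator dynamics, can be illustrated with a minimal sketch. This is an illustration of the general principle, not the authors' actual pipeline: per-frame lip keypoints are centered and scale-normalized (removing the speaker's static lip geometry), and only the frame-to-frame differences are kept as motion features.

```python
import numpy as np

def shape_independent_dynamics(keypoints):
    """Reduce a lip-keypoint track of shape (T, K, 2) to motion-only features.

    Each frame is translated to its centroid and scale-normalized, so the
    static position and size of the lips are removed; the remaining
    frame-to-frame differences capture only articulator motion.
    """
    kp = np.asarray(keypoints, dtype=float)
    centered = kp - kp.mean(axis=1, keepdims=True)  # remove position per frame
    scale = np.sqrt((centered ** 2).sum(axis=(1, 2), keepdims=True) / kp.shape[1])
    normalized = centered / scale                   # remove overall lip size
    return np.diff(normalized, axis=0)              # keep only the dynamics

# Toy track: 5 frames of 20 hypothetical lip keypoints
rng = np.random.default_rng(0)
track = rng.normal(size=(5, 20, 2))
feats = shape_independent_dynamics(track)

# The same motion performed by a "bigger" mouth, shifted in the image plane,
# yields identical features -- the static shape difference is factored out.
transformed = 3.0 * track + np.array([10.0, -4.0])
assert feats.shape == (4, 20, 2)
assert np.allclose(feats, shape_independent_dynamics(transformed))
```

Note that this invariance to translation and uniform scaling is exactly what lets such features survive changes in camera distance and framing, while a classifier trained on them cannot fall back on static lip geometry.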
Problem

Research questions and friction points this paper is trying to address.

Lip Biometrics
Privacy Concerns
Dynamic Lip Movement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lip Motion Dynamics
Continuous Authentication
Robustness against AI Spoofing
Huashan Chen
Institute of Information Engineering, Chinese Academy of Sciences
Cybersecurity Metrics · Biometric Authentication · VR/AR Security & Privacy
Yifan Xu
Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100085, China, and School of Cyber Security, University of Chinese Academy of Sciences, Beijing 101408, China
Yue Feng
Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100085, China, and School of Cyber Security, University of Chinese Academy of Sciences, Beijing 101408, China
Ming Jian
Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100085, China
Feng Liu
Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100085, China, and School of Cyber Security, University of Chinese Academy of Sciences, Beijing 101408, China
Pengfei Hu
School of Computer Science and Technology, Shandong University, Qingdao 266237, China
Kebin Peng
East Carolina University
3D Computer Vision · Machine Learning · Software Engineering
Sen He
Department of Systems and Industrial Engineering, University of Arizona, Tucson 85718, USA
Zi Wang
Department of Computer & Cyber Sciences, Augusta University, Augusta 30912, USA