MuDD: A Multimodal Deception Detection Dataset and GSR-Guided Progressive Distillation for Non-Contact Deception Detection

📅 2026-03-27
🤖 AI Summary
This study addresses the poor cross-subject generalization of non-contact deception detection, which stems from substantial inter-individual variability in visual and auditory cues. To this end, the authors construct MuDD, a large-scale multimodal deception dataset, and propose a GSR-guided progressive distillation framework that, for the first time, integrates feature-level and decision-level knowledge distillation. The framework incorporates a dynamic routing mechanism to enable efficient cross-modal transfer from contact-based physiological signals, namely galvanic skin response (GSR), photoplethysmography (PPG), and heart rate, to non-contact modalities. By further leveraging multimodal fusion and personality trait analysis, the proposed method achieves state-of-the-art performance on both deception detection and concealed digit recognition, significantly improving model stability and generalization.
📝 Abstract
Non-contact automatic deception detection remains challenging because visual and auditory deception cues often lack stable cross-subject patterns. In contrast, galvanic skin response (GSR) provides more reliable physiological cues and has been widely used in contact-based deception detection. In this work, we leverage stable deception-related knowledge in GSR to guide representation learning in non-contact modalities through cross-modal knowledge distillation. A key obstacle, however, is the lack of a suitable dataset for this setting. To address this, we introduce MuDD, a large-scale Multimodal Deception Detection dataset containing recordings from 130 participants over 690 minutes. In addition to video, audio, and GSR, MuDD also provides photoplethysmography (PPG), heart rate, and personality traits, supporting broader scientific studies of deception. Based on this dataset, we propose GSR-guided Progressive Distillation (GPD), a cross-modal distillation framework for mitigating the negative transfer caused by the large modality mismatch between GSR and non-contact signals. The core innovation of GPD is the integration of progressive feature-level and digit-level distillation with dynamic routing, which allows the model to adaptively determine how teacher knowledge should be transferred during training, leading to more stable cross-modal knowledge transfer. Extensive experiments and visualizations show that GPD outperforms existing methods and achieves state-of-the-art performance on both deception detection and concealed-digit identification.
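The abstract describes GPD only at a high level and no code accompanies this page. As a rough illustration of the core idea, the sketch below (plain Python, all function names hypothetical, not the authors' implementation) shows how a combined feature-level and decision-level distillation objective with a scalar routing weight might be written: the teacher's (GSR-branch) intermediate features are matched with an MSE term, its softened logits with a temperature-scaled KL term, and a gate in [0, 1] stands in for the dynamic router that decides how much weight each transfer path receives.

```python
import math


def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]


def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)


def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)


def distillation_loss(teacher_feat, student_feat,
                      teacher_logits, student_logits,
                      route_gate, temperature=2.0):
    """Hypothetical combined distillation objective.

    route_gate in [0, 1] plays the role of the dynamic router: it decides
    how much weight goes to feature-level matching (MSE on intermediate
    features) versus decision-level matching (KL between softened teacher
    and student predictions). In the paper the routing is learned; here it
    is just a scalar argument for illustration.
    """
    feat_loss = mse(teacher_feat, student_feat)
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # The temperature**2 factor keeps soft-label gradients on a comparable
    # scale, as is standard in knowledge distillation.
    dec_loss = kl_divergence(p_teacher, p_student) * temperature ** 2
    return route_gate * feat_loss + (1.0 - route_gate) * dec_loss
```

A progressive schedule could then, for example, start training with `route_gate` close to 1 (feature alignment first) and anneal it toward 0 so decision-level transfer dominates later; the actual GPD routing policy is learned and more involved than this scalar stand-in.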
Problem

Research questions and friction points this paper is trying to address.

deception detection
non-contact sensing
multimodal learning
cross-modal knowledge distillation
physiological signals
Innovation

Methods, ideas, or system contributions that make the work stand out.

cross-modal knowledge distillation
progressive distillation
dynamic routing
non-contact deception detection
multimodal dataset
👥 Authors
Peiyuan Jiang
School of Computer Science and Engineering, University of Electronic Science and Technology of China
Yao Liu
Professor of Computer Science, University of South Florida
Yanglei Gan
School of Computer Science and Engineering, University of Electronic Science and Technology of China
Jiaye Yang
School of Computer Science and Engineering, University of Electronic Science and Technology of China
Lu Liu
School of Computer Science and Engineering, University of Electronic Science and Technology of China
Daibing Yao
Yizhou Prison, Sichuan Province
Qiao Liu
School of Computer Science and Engineering, University of Electronic Science and Technology of China