READ-Net: Clarifying Emotional Ambiguity via Adaptive Feature Recalibration for Audio-Visual Depression Detection

📅 2026-01-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing audio-visual depression detection methods often conflate transient emotional states with stable depressive symptoms, leading to affective ambiguity and reduced accuracy. To address this issue, this work proposes READ-Net, a novel framework that explicitly targets affective ambiguity through an Adaptive Feature Recalibration (AFR) mechanism. AFR dynamically adjusts the weights of multimodal affective features, preserving depression-relevant cues while suppressing irrelevant emotional interference. The proposed module is designed to be flexibly integrated into existing architectures and demonstrates consistent performance gains across three public datasets, achieving an average improvement of 4.55% in accuracy and 1.26% in F1 score over state-of-the-art methods. These results underscore the robustness and effectiveness of READ-Net in mitigating emotion-induced confounds in depression detection.

📝 Abstract
Depression is a severe global mental health issue that impairs daily functioning and overall quality of life. Although recent audio-visual approaches have improved automatic depression detection, methods that ignore emotional cues often fail to capture subtle depressive signals hidden within emotional expressions. Conversely, those incorporating emotions frequently confuse transient emotional expressions with stable depressive symptoms in feature representations, a phenomenon termed *Emotional Ambiguity*, thereby leading to detection errors. To address this critical issue, we propose READ-Net, the first audio-visual depression detection framework explicitly designed to resolve Emotional Ambiguity through Adaptive Feature Recalibration (AFR). The core insight of AFR is to dynamically adjust the weights of emotional features to enhance depression-related signals. Rather than merely overlooking or naively combining emotional information, READ-Net innovatively identifies and preserves depression-relevant cues within emotional features, while adaptively filtering out irrelevant emotional noise. This recalibration strategy significantly clarifies feature representations and effectively mitigates the persistent challenge of emotional interference. Additionally, READ-Net can be easily integrated into existing frameworks for improved performance. Extensive evaluations on three publicly available datasets show that READ-Net outperforms state-of-the-art methods, with average gains of 4.55% in accuracy and 1.26% in F1-score, demonstrating its robustness to emotional disturbances and improving audio-visual depression detection.
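The paper itself does not include implementation details on this page, but the AFR idea described in the abstract — dynamically reweighting affective feature channels to keep depression-relevant cues and suppress transient emotional noise — can be sketched as a squeeze-and-excitation-style gating step. The function name, bottleneck shape, and weight matrices `w1`/`w2` below are illustrative assumptions, not the authors' actual architecture:

```python
import numpy as np

def adaptive_feature_recalibration(features, w1, w2):
    """Hypothetical AFR-style gating sketch.

    features: (T, C) array of fused audio-visual affective features
              over T time steps and C channels.
    w1: (C // r, C) bottleneck projection (r = reduction ratio).
    w2: (C, C // r) expansion back to per-channel gates.
    Returns features with each channel rescaled by a learned gate in (0, 1).
    """
    # Squeeze: summarize each channel across time.
    z = features.mean(axis=0)                     # (C,)
    # Excitation: two-layer bottleneck producing per-channel gates.
    h = np.maximum(0.0, w1 @ z)                   # ReLU, (C // r,)
    gates = 1.0 / (1.0 + np.exp(-(w2 @ h)))       # sigmoid, (C,)
    # Recalibrate: amplify depression-relevant channels, damp the rest.
    return features * gates                       # broadcasts over T
```

In a trained model, `w1` and `w2` would be learned so that channels carrying stable depressive cues receive gates near 1 while channels dominated by transient emotional expression are attenuated; the sigmoid bounds each gate in (0, 1), so recalibration can only suppress, never invert, a channel.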
Problem

Research questions and friction points this paper is trying to address.

Emotional Ambiguity
Depression Detection
Audio-Visual Analysis
Feature Representation
Emotional Interference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Emotional Ambiguity
Adaptive Feature Recalibration
Audio-Visual Depression Detection
Feature Recalibration
Depression-Related Cues
Chenglizhao Chen
College of Computer Science and Technology, China University of Petroleum (East China), China
Boze Li
College of Computer Science and Technology, China University of Petroleum (East China), China
Mengke Song
College of Computer Science and Technology, China University of Petroleum (East China), China
Dehao Feng
College of Computer Science and Technology, China University of Petroleum (East China), China
Xinyu Liu
Harbin Institute of Technology, China
Biped walking robot · Automation · Control · Mechatronics
Shanchen Pang
China University of Petroleum
AI · Petri Net · Cloud Computing · Edge Computing
Jufeng Yang
Nankai University
Computer vision · Machine learning · Multimedia
Hui Yu
Professor of Visual and Cognitive Computing, University of Glasgow
Visual Computing · Cognitive Computing · Social Robot · Parallel Intelligence