PAIR-Net: Enhancing Egocentric Speaker Detection via Pretrained Audio-Visual Fusion and Alignment Loss

📅 2025-06-02
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
First-person videos suffer from severe degradation in active speaker detection (ASD) due to egocentric viewpoint jitter, motion blur, and off-screen speech. To address this, we propose a robust cross-modal ASD method: partially freezing the Whisper audio encoder, fine-tuning the AV-HuBERT visual backbone, and introducing an end-to-end trainable cross-modal alignment loss that jointly enforces temporal synchronization and semantic alignment between audio and visual features. Our approach neither assumes frontal faces nor requires multi-speaker contextual modeling. Evaluated on the Ego4D ASD benchmark, it achieves 76.6% mAP, outperforming LoCoNet and STHG by 8.2% and 12.9% mAP, respectively, and setting a new state of the art. The core contributions are threefold: (i) the first integration of a partially frozen Whisper encoder with a fine-tuned AV-HuBERT backbone in a unified optimization framework; (ii) a learnable cross-modal alignment mechanism designed explicitly for egocentric settings; and (iii) significantly improved robustness to the domain-specific degradations inherent in first-person video.
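
The summary names the components but not how they are wired together. Below is a minimal PyTorch sketch of one plausible arrangement, assuming a shared projection space and simple concatenation fusion; the stand-in encoders, feature dimensions, and classifier head are illustrative choices, not details taken from the paper (the sketch also freezes the whole audio encoder for simplicity, whereas the paper describes a partially frozen Whisper).

```python
import torch
import torch.nn as nn

class PairNetSketch(nn.Module):
    """Two-stream fusion: frozen audio encoder plus trainable visual encoder,
    projected into a shared space, concatenated, and classified per frame."""
    def __init__(self, audio_encoder, visual_encoder,
                 audio_dim=512, visual_dim=768, fused_dim=256):
        super().__init__()
        self.audio_encoder = audio_encoder    # stand-in for the Whisper encoder
        self.visual_encoder = visual_encoder  # stand-in for AV-HuBERT
        for p in self.audio_encoder.parameters():
            p.requires_grad = False  # frozen here; the paper partially freezes Whisper
        self.audio_proj = nn.Linear(audio_dim, fused_dim)
        self.visual_proj = nn.Linear(visual_dim, fused_dim)
        self.classifier = nn.Linear(2 * fused_dim, 1)  # active-speaker logit per frame

    def forward(self, audio, video):
        a = self.audio_proj(self.audio_encoder(audio))    # (B, T, fused_dim)
        v = self.visual_proj(self.visual_encoder(video))  # (B, T, fused_dim)
        logits = self.classifier(torch.cat([a, v], dim=-1)).squeeze(-1)
        return logits, a, v  # projections feed the alignment loss sketched below

# Toy stand-in encoders so the sketch runs end to end.
audio_enc = nn.Linear(80, 512)     # per-frame mel features -> audio embeddings
visual_enc = nn.Linear(1024, 768)  # per-frame face crops -> visual embeddings
model = PairNetSketch(audio_enc, visual_enc)
logits, a, v = model(torch.randn(2, 10, 80), torch.randn(2, 10, 1024))
```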

📝 Abstract
Active speaker detection (ASD) in egocentric videos presents unique challenges due to unstable viewpoints, motion blur, and off-screen speech sources, conditions under which traditional visual-centric methods degrade significantly. We introduce PAIR-Net (Pretrained Audio-Visual Integration with Regularization Network), an effective model that integrates a partially frozen Whisper audio encoder with a fine-tuned AV-HuBERT visual backbone to robustly fuse cross-modal cues. To counteract modality imbalance, we add an inter-modal alignment loss that synchronizes audio and visual representations, enabling more consistent convergence across modalities. Without relying on multi-speaker context or ideal frontal views, PAIR-Net achieves state-of-the-art performance on the Ego4D ASD benchmark with 76.6% mAP, surpassing LoCoNet and STHG by 8.2% and 12.9% mAP, respectively. Our results highlight the value of pretrained audio priors and alignment-based fusion for robust ASD under real-world egocentric conditions.
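
The abstract does not give the exact form of the inter-modal alignment loss. A common way to synchronize time-aligned audio and visual features is a symmetric InfoNCE over time steps, sketched below under that assumption; the temperature value and per-clip formulation are likewise assumed rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def alignment_loss(a: torch.Tensor, v: torch.Tensor, temperature: float = 0.07):
    """a, v: (T, D) time-aligned audio and visual features for one clip.
    Matching time steps are positives; all other pairs are negatives."""
    a = F.normalize(a, dim=-1)
    v = F.normalize(v, dim=-1)
    logits = a @ v.t() / temperature               # (T, T) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    # Symmetric cross-entropy: audio-to-visual and visual-to-audio retrieval.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Example: 10 frames of 256-dim projected features.
loss = alignment_loss(torch.randn(10, 256), torch.randn(10, 256))
```
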
Problem

Research questions and friction points this paper is trying to address.

Detecting active speakers in egocentric videos with unstable viewpoints.
Improving audio-visual fusion for robust cross-modal cue integration.
Addressing modality imbalance via inter-modal alignment loss.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fuses a partially frozen Whisper audio encoder with a fine-tuned AV-HuBERT visual backbone
Uses an inter-modal alignment loss to synchronize the two modalities (see the joint-objective sketch after this list)
Achieves state-of-the-art egocentric ASD performance on Ego4D (76.6% mAP)
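
Putting the pieces together, a plausible joint objective is per-frame binary cross-entropy on the speaking label plus the weighted alignment term. The weight `lam` and the per-clip averaging are assumptions (the summary does not report them), and the snippet reuses `model` and `alignment_loss` from the sketches above.

```python
import torch
import torch.nn.functional as F

# Reuses `model` and `alignment_loss` from the earlier sketches.
labels = torch.randint(0, 2, (2, 10)).float()  # per-frame speaking labels
logits, a, v = model(torch.randn(2, 10, 80), torch.randn(2, 10, 1024))

lam = 0.1  # assumed loss weight; not reported in this summary
bce = F.binary_cross_entropy_with_logits(logits, labels)
align = torch.stack([alignment_loss(a[i], v[i]) for i in range(a.size(0))]).mean()
(bce + lam * align).backward()  # gradients skip the frozen audio encoder
```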
👥 Authors
Yu Wang, Indiana University Bloomington
Juhyung Ha, Ph.D. student, Indiana University (Computer Vision)
David J. Crandall, Indiana University Bloomington