UWAV: Uncertainty-weighted Weakly-supervised Audio-Visual Video Parsing

📅 2025-05-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses weakly supervised audio-visual video parsing (AVVP): localizing the temporal boundaries of uni-modal (visual-only or audio-only) and multi-modal (audio-visually synchronized) events using only video-level labels. To mitigate pseudo-label noise and modality bias, the authors propose an uncertainty-weighted pseudo-labeling mechanism, integrated with feature-mixup regularization and inter-segment temporal dependency modeling, to improve pseudo-label reliability and cross-modal alignment. Evaluated on two mainstream benchmarks, the method outperforms state-of-the-art approaches across multiple metrics, demonstrating strong generalization and robustness. The core contributions are: (1) an uncertainty-aware pseudo-label optimization framework that dynamically weights pseudo-labels by prediction confidence; and (2) a collaborative regularization strategy tailored to weakly supervised AVVP that jointly encourages cross-modal consistency and temporal coherence.
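The uncertainty-weighting idea in (1) can be illustrated with a minimal sketch: a segment-level binary cross-entropy loss in which each pseudo-label's contribution is scaled by how confident the pseudo-label estimate is. The function name and the use of `max(p, 1-p)` as the confidence measure are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def uncertainty_weighted_bce(pseudo_probs, predictions, eps=1e-7):
    """Pseudo-label BCE down-weighted by pseudo-label uncertainty.

    pseudo_probs: per-segment pseudo-label probabilities in [0, 1]
    predictions:  model's per-segment predicted probabilities

    Confidence here is max(p, 1-p) -- high when the pseudo-label is
    near 0 or 1, low near 0.5 (a simple stand-in for the paper's
    uncertainty estimate; assumption for illustration).
    """
    p = np.clip(pseudo_probs, eps, 1 - eps)
    q = np.clip(predictions, eps, 1 - eps)
    weights = np.maximum(p, 1 - p)          # confidence per segment
    hard = (p > 0.5).astype(float)          # binarized pseudo-labels
    bce = -(hard * np.log(q) + (1 - hard) * np.log(1 - q))
    return float(np.mean(weights * bce))
```

Under this weighting, segments whose pseudo-labels sit near 0.5 (uncertain) contribute less to the loss than segments with near-binary pseudo-labels, which is the intended effect of confidence-based dynamic weighting.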

📝 Abstract
Audio-Visual Video Parsing (AVVP) entails the challenging task of localizing both uni-modal events (i.e., those occurring exclusively in either the visual or acoustic modality of a video) and multi-modal events (i.e., those occurring in both modalities concurrently). Moreover, the prohibitive cost of annotating training data with the class labels of all these events, along with their start and end times, imposes constraints on the scalability of AVVP techniques unless they can be trained in a weakly-supervised setting, where only modality-agnostic, video-level labels are available in the training data. To this end, recently proposed approaches seek to generate segment-level pseudo-labels to better guide model training. However, the absence of inter-segment dependencies when generating these pseudo-labels and the general bias towards predicting labels that are absent in a segment limit their performance. This work proposes a novel approach, Uncertainty-weighted Weakly-supervised Audio-visual Video Parsing (UWAV), to overcome these weaknesses. Our approach additionally factors in the uncertainty associated with these estimated pseudo-labels and incorporates a feature-mixup-based training regularization for improved training. Empirical results show that UWAV outperforms state-of-the-art methods for the AVVP task on multiple metrics, across two different datasets, attesting to its effectiveness and generalizability.
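The feature-mixup regularization mentioned above can be sketched as the generic mixup step (convexly combining two feature/label pairs with a Beta-sampled coefficient). How UWAV pairs samples across modalities is the paper's specific design; this sketch, with assumed names `feature_mixup` and `alpha`, only shows the standard mixing operation.

```python
import numpy as np

def feature_mixup(feats_a, feats_b, labels_a, labels_b, alpha=0.2, rng=None):
    """Standard mixup applied at the feature level.

    Draws lam ~ Beta(alpha, alpha) and returns convex combinations of
    both the features and the (soft) labels. The choice of which pairs
    to mix (e.g., across modalities) is left to the caller; this is a
    generic sketch, not the paper's exact scheme (assumption).
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    mixed_feats = lam * feats_a + (1 - lam) * feats_b
    mixed_labels = lam * labels_a + (1 - lam) * labels_b
    return mixed_feats, mixed_labels, lam
```

Because the mixed labels are soft targets, the training loss for mixed samples is typically the same cross-entropy applied to the interpolated labels, which regularizes the model toward linear behavior between training points.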
Problem

Research questions and friction points this paper is trying to address.

Localizing uni-modal and multi-modal events in videos
Overcoming weakly-supervised training limitations in AVVP
Improving pseudo-label accuracy with uncertainty weighting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uncertainty-weighted pseudo-labels for AVVP
Feature mixup training regularization
Weakly-supervised audio-visual parsing
Yung-Hsuan Lai
M.S. student @ National Taiwan University
Computer Vision · Cross-modality Learning
Janek Ebbers
Paderborn University
Yu-Chiang Frank Wang
National Taiwan University & NVIDIA
Computer Vision · Deep Learning · Machine Learning · Artificial Intelligence
François Germain
Mitsubishi Electric Research Labs (MERL)
Michael Jeffrey Jones
Mitsubishi Electric Research Labs (MERL)
Moitreya Chatterjee
Mitsubishi Electric Research Labs (MERL)