Towards Mitigating Modality Bias in Vision-Language Models for Temporal Action Localization

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the susceptibility of existing vision-language models to linguistic priors in temporal action localization, which often induces modality bias and degrades visual performance. To mitigate this issue, the authors propose ActionVLM, a novel framework that introduces a dynamic reweighting mechanism to assess the incremental gain of language cues over purely visual predictions. Language information is then incorporated as a residual complement to the vision-dominant signal. By combining debiased reweighting with residual aggregation, the method effectively suppresses language-induced overconfidence and enhances vision-centric temporal reasoning. Evaluated on the THUMOS14 dataset, ActionVLM achieves up to a 3.2% improvement in mean average precision (mAP), significantly outperforming current state-of-the-art approaches.

📝 Abstract
Temporal Action Localization (TAL) requires identifying both the boundaries and categories of actions in untrimmed videos. While vision-language models (VLMs) offer rich semantics to complement visual evidence, existing approaches tend to overemphasize linguistic priors at the expense of visual performance, leading to a pronounced modality bias. We propose ActionVLM, a vision-language aggregation framework that systematically mitigates modality bias in TAL. Our key insight is to preserve vision as the dominant signal while adaptively exploiting language only when beneficial. To this end, we introduce (i) a debiasing reweighting module that estimates the language advantage (the incremental benefit of language over vision-only predictions) and dynamically reweights the language modality accordingly, and (ii) a residual aggregation strategy that treats language as a complementary refinement rather than the primary driver. This combination alleviates modality bias, reduces overconfidence from linguistic priors, and strengthens temporal reasoning. Experiments on THUMOS14 show that our model outperforms the state of the art by up to 3.2% mAP.
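The two mechanisms named in the abstract (advantage-based reweighting and residual aggregation) can be sketched as follows. This is a minimal illustration of the idea, not the paper's implementation: the function names, the sigmoid gate, and the temperature parameter are all assumptions.

```python
import math

# Hypothetical sketch of ActionVLM's two mechanisms as described in the
# abstract. The gating form and all names here are illustrative assumptions,
# not details taken from the paper.

def language_advantage(p_vision, p_fused, label):
    """Incremental benefit of language: how much fusing language with
    vision raises the probability assigned to the true class."""
    return p_fused[label] - p_vision[label]

def aggregate(vision_logits, language_logits, advantage, temperature=1.0):
    """Residual aggregation: vision remains the dominant signal; language
    logits are added only as a residual, scaled by a gate derived from the
    estimated advantage (gate -> 0 when language hurts)."""
    gate = 1.0 / (1.0 + math.exp(-advantage / temperature))
    return [v + gate * l for v, l in zip(vision_logits, language_logits)]
```

With a strongly negative advantage, the gate suppresses the language residual and the fused logits stay close to the vision-only prediction, which is the bias-mitigation behavior the abstract describes.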
Problem

Research questions and friction points this paper is trying to address.

modality bias
vision-language models
temporal action localization
linguistic priors
visual performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

modality bias
vision-language models
temporal action localization
adaptive reweighting
residual aggregation
Jiaqi Li
Huazhong University of Science and Technology
Computer Vision, Depth Estimation
Guangming Wang
University of Cambridge, ETH Zurich, and Shanghai Jiao Tong University
Robot Vision, Robot Manipulation, Robotics, Computer Vision, Autonomous Driving
Shuntian Zheng
UVLab, Department of Computer Science, University of Warwick
Minzhe Ni
UVLab, Department of Computer Science, University of Warwick
Xiaoman Lu
UVLab, Department of Computer Science, University of Warwick
Guanghui Ye
College of Computer Science and Electronic Engineering, Hunan University
Yu Guan
Associate Professor, University of Warwick, UK
Activity Recognition, AI for Healthcare, Ubiquitous Computing, Visual Computing, Machine Learning