🤖 AI Summary
This work addresses the susceptibility of existing vision-language models to linguistic priors in temporal action localization, which often induces modality bias and degrades visual performance. To mitigate this issue, the authors propose ActionVLM, a novel framework that introduces a dynamic reweighting mechanism to assess the incremental gain of language cues over purely visual predictions. Language information is then incorporated as a residual complement to the vision-dominant signal. By combining debiased reweighting with residual aggregation, the method effectively suppresses language-induced overconfidence and enhances vision-centric temporal reasoning. Evaluated on the THUMOS14 dataset, ActionVLM achieves up to a 3.2% improvement in mean average precision (mAP), significantly outperforming current state-of-the-art approaches.
📝 Abstract
Temporal Action Localization (TAL) requires identifying both the boundaries and categories of actions in untrimmed videos. While vision-language models (VLMs) offer rich semantics to complement visual evidence, existing approaches tend to overemphasize linguistic priors at the expense of visual performance, leading to a pronounced modality bias. We propose ActionVLM, a vision-language aggregation framework that systematically mitigates modality bias in TAL. Our key insight is to preserve vision as the dominant signal while adaptively exploiting language only when it is beneficial. To this end, we introduce (i) a debiasing reweighting module that estimates the language advantage (the incremental benefit of language cues over vision-only predictions) and dynamically reweights the language modality accordingly, and (ii) a residual aggregation strategy that treats language as a complementary refinement rather than the primary driver. This combination alleviates modality bias, reduces overconfidence induced by linguistic priors, and strengthens temporal reasoning. Experiments on THUMOS14 show that ActionVLM outperforms the state of the art by up to 3.2% mAP.
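The two components described in the abstract can be illustrated with a minimal sketch. The snippet below is a hypothetical NumPy rendering, not the paper's actual implementation: the function name `residual_aggregate` and the use of a confidence-margin sigmoid gate as the "language advantage" estimator are assumptions made for illustration; the paper does not specify its exact estimator.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def residual_aggregate(vision_logits, language_logits):
    """Hypothetical sketch of debiased reweighting + residual aggregation.

    vision_logits, language_logits: (T, C) per-snippet action class scores.
    Returns fused logits (T, C) and the per-snippet language gate (T,).
    """
    p_v = softmax(vision_logits)
    p_l = softmax(language_logits)
    # "Language advantage": here approximated as the confidence margin of
    # the language branch over the vision branch for each snippet.
    adv = p_l.max(axis=-1) - p_v.max(axis=-1)        # (T,)
    w = 1.0 / (1.0 + np.exp(-adv))                    # sigmoid gate in (0, 1)
    # Residual aggregation: vision stays the dominant signal; language
    # enters only as a gated additive refinement.
    fused = vision_logits + w[:, None] * language_logits
    return fused, w
```

Because the gate shrinks toward zero whenever the language branch is less confident than the vision branch, linguistic priors cannot override strong visual evidence, which is the behavior the abstract attributes to the debiased reweighting.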