Efficient Transfer Learning for Video-language Foundation Models

📅 2024-11-18
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In video action recognition, existing temporal modeling approaches often introduce excessive parameters and cause catastrophic forgetting of generic knowledge. To address this, we propose a parameter-efficient Multi-modal Spatio-Temporal Adapter (MSTA) that aligns visual and linguistic representations via lightweight adapters while keeping the backbone frozen. Furthermore, we design a spatio-temporal description-guided consistency distillation mechanism: fine-grained spatio-temporal descriptions, generated by a large language model, serve as cross-modal constraints to regularize both vision and language branches, mitigating overfitting and enhancing semantic discriminability. Our method achieves state-of-the-art performance across four evaluation paradigms: zero-shot, few-shot, base-to-novel class transfer, and fully supervised learning. Crucially, it trains only 2–7% as many parameters as the original model, striking a superior balance among temporal modeling capability, generalization, and parameter efficiency.
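The summary describes MSTA only at a high level; as a rough illustration of the general pattern it builds on (a small bottleneck adapter trained on top of a frozen pre-trained backbone), a minimal PyTorch sketch might look as follows. The module name, dimensions, and the name-based freezing rule are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Generic bottleneck adapter: down-project, activate, up-project, and add
    the result residually to a frozen layer's output. Dims are illustrative."""
    def __init__(self, dim: int = 768, reduction: int = 4):
        super().__init__()
        hidden = dim // reduction
        self.down = nn.Linear(dim, hidden)
        self.act = nn.GELU()
        self.up = nn.Linear(hidden, dim)
        nn.init.zeros_(self.up.weight)  # near-identity mapping at initialization
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

def freeze_except_adapters(model: nn.Module) -> None:
    """Keep the pre-trained backbone frozen; train only parameters whose
    name marks them as adapter weights (naming convention assumed)."""
    for name, p in model.named_parameters():
        p.requires_grad = "adapter" in name
```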

📝 Abstract
Pre-trained vision-language models provide a robust foundation for efficient transfer learning across various downstream tasks. In the field of video action recognition, mainstream approaches often introduce additional modules to capture temporal information. Although these additional modules increase the model's capacity, enabling it to better capture video-specific inductive biases, existing methods typically introduce a substantial number of new parameters and are prone to catastrophic forgetting of previously acquired generalizable knowledge. In this paper, we propose a parameter-efficient Multi-modal Spatio-Temporal Adapter (MSTA) to enhance the alignment between textual and visual representations, achieving a balance between generalizable knowledge and task-specific adaptation. Furthermore, to mitigate overfitting and enhance generalizability, we introduce a spatio-temporal description-guided consistency constraint. This constraint involves providing template inputs (e.g., "a video of {cls}") to the trainable language branch and LLM-generated spatio-temporal descriptions to the pre-trained language branch, enforcing output consistency between the branches. This approach reduces overfitting to downstream tasks and enhances the distinguishability of the trainable branch within the spatio-temporal semantic space. We evaluate the effectiveness of our approach across four tasks: zero-shot transfer, few-shot learning, base-to-novel generalization, and fully supervised learning. Compared to many state-of-the-art methods, our MSTA achieves outstanding performance across all evaluations, while using only 2–7% of the trainable parameters of the original model.
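The abstract specifies the inputs to each language branch but not the exact form of the consistency objective. One plausible sketch, assuming both text encoders return a single embedding per prompt and that the gap is measured with cosine similarity (an assumption, not confirmed by the paper), is:

```python
import torch
import torch.nn.functional as F

def description_consistency_loss(trainable_text_encoder, frozen_text_encoder,
                                 template_tokens, description_tokens):
    """Consistency term between the two language branches:
    - the trainable branch sees a class template, e.g. "a video of {cls}";
    - the frozen pre-trained branch sees an LLM-generated spatio-temporal
      description of the same class.
    The distance (1 - cosine similarity) is an assumed choice for illustration."""
    z_train = trainable_text_encoder(template_tokens)        # gradients flow here
    with torch.no_grad():
        z_frozen = frozen_text_encoder(description_tokens)   # kept fixed
    return 1.0 - F.cosine_similarity(z_train, z_frozen, dim=-1).mean()
```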
Problem

Research questions and friction points this paper is trying to address.

Enhance video-text alignment with fewer parameters
Mitigate overfitting in video action recognition
Balance generalizable knowledge and task-specific adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter-efficient Multi-modal Spatio-Temporal Adapter (MSTA)
Spatio-temporal description-guided consistency constraint
Reduced trainable parameters to 2–7% of the original model (see the sketch below)
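The 2–7% figure is the ratio of trainable to total parameters; a small generic PyTorch helper (not from the paper) makes that comparison concrete:

```python
def trainable_fraction(model) -> float:
    """Fraction of parameters that receive gradients; for MSTA-style tuning
    this should land around 0.02-0.07 of the full model."""
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return trainable / total
```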
Haoxing Chen
Tiansuan Lab, Ant Group
Zizheng Huang
Nanjing University
Yan Hong
Tiansuan Lab, Ant Group
Yanshuo Wang
Australian National University
Zhongcai Lyu
Tiansuan Lab, Ant Group
Zhuoer Xu
Tiansuan Lab, Ant Group
Jun Lan
Ant Group
Zhangxuan Gu
Ant Group
computer vision