Task-Adapter++: Task-specific Adaptation with Order-aware Alignment for Few-shot Action Recognition

πŸ“… 2025-05-09
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Pretrained vision-language models (e.g., CLIP) adapt poorly to few-shot action recognition (FSAR) due to four key limitations: (i) full fine-tuning harms generalization, (ii) task-specific visual modeling is weak, (iii) textual semantic ordering is neglected, and (iv) cross-modal temporal coupling goes unmodeled. Method: We propose a parameter-efficient dual-path adaptation framework comprising: (i) a task-aware Task-Adapter for the image encoder; (ii) a semantic-order adapter for the text encoder, leveraging LLM-generated sub-action sequence descriptions; and (iii) a fine-grained cross-modal temporal alignment mechanism that jointly models sub-action semantic order and visual temporal dynamics, the first such approach in FSAR. Contribution/Results: Our method achieves state-of-the-art performance across five standard benchmarks, demonstrating superior generalization and temporal reasoning. The code is publicly available.

πŸ“ Abstract
Large-scale pre-trained models have achieved remarkable success in language and image tasks, leading an increasing number of studies to explore the application of pre-trained image models, such as CLIP, in the domain of few-shot action recognition (FSAR). However, current methods generally suffer from several problems: 1) Direct fine-tuning often undermines the generalization capability of the pre-trained model; 2) The exploration of task-specific information is insufficient in visual tasks; 3) Semantic order information is typically overlooked during text modeling; 4) Existing cross-modal alignment techniques ignore the temporal coupling of multimodal information. To address these, we propose Task-Adapter++, a parameter-efficient dual adaptation method for both image and text encoders. Specifically, to make full use of the variations across different few-shot learning tasks, we design a task-specific adaptation for the image encoder so that the most discriminative information can be well noticed during feature extraction. Furthermore, we leverage large language models (LLMs) to generate detailed sequential sub-action descriptions for each action class, and introduce semantic order adapters into the text encoder to effectively model the sequential relationships between these sub-actions. Finally, we develop an innovative fine-grained cross-modal alignment strategy that actively maps visual features to reside in the same temporal stage as the corresponding semantic descriptions. Extensive experiments demonstrate the effectiveness and superiority of the proposed method, which consistently achieves state-of-the-art performance on five benchmarks. The code is open-sourced at https://github.com/Jaulin-Bage/Task-Adapter-pp.
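The order-aware alignment described in the abstract can be illustrated with a small sketch: split the video's frame features into K contiguous temporal stages (K = number of LLM-generated sub-actions), pool each stage, and score stage k only against sub-action description k, so that temporal order matters. This is a hypothetical, dependency-free illustration of the idea, not the authors' implementation; the function name `stage_aligned_score` is invented.

```python
import math

def _cosine(u, v):
    # cosine similarity between two plain-Python vectors
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def _mean(vectors):
    # element-wise mean pooling over a list of vectors
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def stage_aligned_score(frame_feats, sub_action_embs):
    """Order-aware video/text score: pool frames into len(sub_action_embs)
    contiguous stages and match stage k against sub-action embedding k."""
    k = len(sub_action_embs)
    t = len(frame_feats)
    # contiguous, roughly equal chunks preserve temporal order
    bounds = [round(i * t / k) for i in range(k + 1)]
    stages = [_mean(frame_feats[bounds[i]:bounds[i + 1]]) for i in range(k)]
    return sum(_cosine(s, e) for s, e in zip(stages, sub_action_embs)) / k
```

Because each stage is matched only to its own sub-action, a video whose stages appear in the wrong order scores lower than one whose visual dynamics follow the described sub-action sequence.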
Problem

Research questions and friction points this paper is trying to address.

Preserving the pre-trained model's generalization during FSAR adaptation
Extracting task-specific information in visual feature modeling
Modeling sequential sub-action relationships in text
Innovation

Methods, ideas, or system contributions that make the work stand out.

Task-specific adaptation for image encoder
Semantic order adapters for text encoder
Fine-grained cross-modal alignment strategy
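Parameter-efficient adaptation of the kind listed above typically inserts small residual bottleneck modules into a frozen encoder. Below is a minimal, hypothetical sketch of such an adapter block in dependency-free Python; the real Task-Adapter++ modules operate on tensors inside CLIP's transformer layers, and the names `BottleneckAdapter`, `W_down`, and `W_up` are illustrative assumptions.

```python
def matvec(W, x):
    # W: list of rows; x: vector -> W @ x
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, a) for a in v]

class BottleneckAdapter:
    """Residual bottleneck: h + W_up @ relu(W_down @ h).
    Only W_down and W_up would be trained; the backbone stays frozen."""
    def __init__(self, W_down, W_up):
        self.W_down = W_down  # (r x d) down-projection to low rank r
        self.W_up = W_up      # (d x r) up-projection back to width d
    def __call__(self, h):
        z = relu(matvec(self.W_down, h))
        return [hi + ui for hi, ui in zip(h, matvec(self.W_up, z))]
```

A common trick with such modules is to initialize the up-projection to zero so the adapter starts as an identity mapping, and training departs smoothly from the frozen backbone's behavior.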
Congqi Cao
School of Computer Science, Northwestern Polytechnical University
Computer Vision · Action Recognition

Peiheng Han
School of Computer Science, Northwestern Polytechnical University, China

Yueran Zhang
School of Computer Science, Northwestern Polytechnical University, China

Yating Yu
Northwestern Polytechnical University
Video Understanding

Qinyi Lv
School of Electronics and Information, Northwestern Polytechnical University, China

Lingtong Min
School of Electronics and Information, Northwestern Polytechnical University, China

Yanning Zhang
Northwestern Polytechnical University
Computer Vision