AI Summary
Pretrained vision-language models (e.g., CLIP) suffer from poor adaptability in few-shot action recognition (FSAR) due to four key limitations: (i) full fine-tuning harms generalization, (ii) weak task-specific visual modeling, (iii) neglect of textual semantic ordering, and (iv) absence of cross-modal temporal coupling modeling. Method: We propose a parameter-efficient dual-path adaptation framework comprising: (i) a task-aware Task-Adapter for the image encoder; (ii) a semantic-order adapter for the text encoder, leveraging LLM-generated sub-action sequence descriptions; and (iii) a fine-grained cross-modal temporal alignment mechanism that jointly models both sub-action semantic order and visual feature temporal dynamics, the first such approach in FSAR. Contribution/Results: Our method achieves state-of-the-art performance across five standard benchmarks, demonstrating superior generalization and temporal reasoning. The code is publicly available.
Abstract
Large-scale pre-trained models have achieved remarkable success in language and image tasks, leading a growing number of studies to apply pre-trained image models such as CLIP to few-shot action recognition (FSAR). However, current methods generally suffer from several problems: 1) direct fine-tuning often undermines the generalization capability of the pre-trained model; 2) task-specific information is insufficiently explored in the visual branch; 3) semantic order information is typically overlooked during text modeling; 4) existing cross-modal alignment techniques ignore the temporal coupling of multimodal information. To address these issues, we propose Task-Adapter++, a parameter-efficient dual adaptation method for both the image and text encoders. Specifically, to make full use of the variations across different few-shot learning tasks, we design a task-specific adaptation for the image encoder so that the most discriminative information is well captured during feature extraction. Furthermore, we leverage large language models (LLMs) to generate detailed sequential sub-action descriptions for each action class, and introduce semantic order adapters into the text encoder to effectively model the sequential relationships between these sub-actions. Finally, we develop a fine-grained cross-modal alignment strategy that maps visual features into the same temporal stage as the corresponding semantic descriptions. Extensive experiments demonstrate the effectiveness and superiority of the proposed method, which consistently achieves state-of-the-art performance on five benchmarks. The code is open-sourced at https://github.com/Jaulin-Bage/Task-Adapter-pp.
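To make the "parameter-efficient adaptation" idea concrete, here is a minimal NumPy sketch of a generic bottleneck adapter of the kind typically inserted into a frozen encoder (down-project, nonlinearity, up-project, residual add). This is an illustrative sketch of the general technique, not the paper's actual Task-Adapter architecture; all names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def adapter(x, W_down, W_up):
    # Bottleneck adapter: down-project, ReLU, up-project, then add back
    # to the frozen encoder's feature via a residual connection.
    h = np.maximum(x @ W_down, 0.0)
    return x + h @ W_up

dim, bottleneck = 8, 2                                  # illustrative sizes
W_down = rng.normal(scale=0.1, size=(dim, bottleneck))  # trainable
W_up = np.zeros((bottleneck, dim))                      # zero-init: adapter starts as identity

x = rng.normal(size=(4, dim))  # a batch of frozen-encoder features
y = adapter(x, W_down, W_up)
assert np.allclose(x, y)  # zero-initialized up-projection leaves features unchanged
```

The zero-initialized up-projection is a common design choice for such adapters: training starts exactly from the pre-trained model's behavior, and only the small `W_down`/`W_up` matrices are updated, which is what keeps the method parameter-efficient.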
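The fine-grained cross-modal temporal alignment can likewise be sketched at a high level: split the frame features into as many temporal stages as there are sub-action descriptions, then score each stage against the same-index description. Again, this is a hedged illustration of the stage-wise matching idea under made-up features; it is not the paper's actual alignment objective.

```python
import numpy as np

rng = np.random.default_rng(1)
T, S, d = 8, 4, 16  # frames, sub-actions, feature dim (all illustrative)

frames = rng.normal(size=(T, d))      # stand-in for per-frame visual features
subactions = rng.normal(size=(S, d))  # stand-in for sub-action text features

def l2norm(a):
    return a / np.linalg.norm(a, axis=-1, keepdims=True)

# Stage-wise alignment: partition the frames into S contiguous temporal
# stages, average each stage, and compare it with the sub-action
# description that occupies the same position in the semantic order.
stages = l2norm(np.stack([seg.mean(0) for seg in np.array_split(frames, S)]))
texts = l2norm(subactions)
score = (stages * texts).sum(-1).mean()  # mean same-stage cosine similarity
```

Matching stage `i` only to description `i` is what couples visual temporal dynamics with the semantic order of the sub-actions, in contrast to order-agnostic global pooling of video and text features.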