TAMT: Temporal-Aware Model Tuning for Cross-Domain Few-Shot Action Recognition

πŸ“… 2024-11-28
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the high computational cost of source-target joint training and the underutilized potential of pre-trained models in cross-domain few-shot action recognition (CDFSAR), this paper proposes Temporal-Aware Model Tuning (TAMT), a decoupled framework that pre-trains on source data once and then fine-tunes on each target dataset. The core contribution is a Hierarchical Temporal Tuning Network (HTTN), which introduces lightweight local temporal-aware adapters (TAA) that recalibrate the intermediate frame-level features of a frozen backbone, and global temporal-aware moment tuning (GTMT) that produces stronger video-level representations for few-shot matching. Because the backbone stays frozen, only a small number of parameters are tuned per target domain. Evaluated on multiple video benchmarks, the method outperforms recent counterparts by 13%–31%, achieving state-of-the-art CDFSAR accuracy while substantially reducing computational cost.
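The frozen-backbone adapter idea above can be sketched minimally. The snippet below is an illustrative assumption, not the paper's exact TAA design: a small down-project/up-project bottleneck with a residual connection recalibrates frame-level features, so only the two projection matrices are trainable.

```python
import numpy as np

rng = np.random.default_rng(0)

T, D, R = 8, 64, 8          # frames, feature dim, bottleneck rank (R << D)
W_down = rng.normal(0, 0.02, (D, R))   # trainable down-projection: D*R params
W_up = rng.normal(0, 0.02, (R, D))     # trainable up-projection: R*D params

def adapter(frames):
    """Residual recalibration of frozen frame features, shape (T, D)."""
    hidden = np.maximum(frames @ W_down, 0.0)   # down-project + ReLU
    return frames + hidden @ W_up               # residual up-projection

frozen_feats = rng.normal(size=(T, D))          # stand-in for frozen backbone output
tuned = adapter(frozen_feats)
print(tuned.shape)                              # (8, 64)
# Trainable parameters: 2*D*R = 1024, versus D*D = 4096 for one full D-by-D layer,
# which is why adapter tuning avoids retraining the deep model per target dataset.
```

The residual form means the adapter starts near the identity, so the frozen pre-trained features are preserved at initialization and only gradually adapted to the target domain.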

πŸ“ Abstract
Going beyond few-shot action recognition (FSAR), cross-domain FSAR (CDFSAR) has attracted recent research interest by addressing the domain gap in source-to-target transfer learning. Existing CDFSAR methods mainly focus on joint training of source and target data to mitigate the side effect of the domain gap. However, such methods suffer from two limitations. First, pair-wise joint training requires retraining deep models for each combination of one source dataset and multiple target ones, which incurs heavy computation cost, especially for large source and small target data. Second, pre-trained models after joint training are applied to the target domain in a straightforward manner, hardly exploiting the full potential of pre-trained models and thus limiting recognition performance. To overcome the above limitations, this paper proposes a simple yet effective baseline, namely Temporal-Aware Model Tuning (TAMT), for CDFSAR. Specifically, TAMT involves a decoupled paradigm that performs pre-training on source data and fine-tuning on target data, which avoids retraining for multiple target datasets with a single source. To effectively and efficiently explore the potential of pre-trained models in transferring to the target domain, TAMT proposes a Hierarchical Temporal Tuning Network (HTTN), whose core involves local temporal-aware adapters (TAA) and global temporal-aware moment tuning (GTMT). In particular, TAA learns few parameters to recalibrate the intermediate features of frozen pre-trained models, enabling efficient adaptation to target domains. Furthermore, GTMT helps to generate powerful video representations, improving matching performance on the target domain. Experiments on several widely used video benchmarks show TAMT outperforms recently proposed counterparts by 13%∼31%, achieving new state-of-the-art CDFSAR results.
Problem

Research questions and friction points this paper is trying to address.

Addresses domain gap in cross-domain few-shot action recognition
Eliminates retraining for multiple target datasets with single source
Enhances pre-trained model potential via temporal-aware tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decoupled pre-training and fine-tuning paradigm
Hierarchical Temporal Tuning Network (HTTN)
Local temporal-aware adapters and global moment tuning
πŸ”Ž Similar Papers
No similar papers found.