D$^2$ST-Adapter: Disentangled-and-Deformable Spatio-Temporal Adapter for Few-shot Action Recognition

📅 2023-12-03
📈 Citations: 3
Influential: 1
🤖 AI Summary
To address overfitting and poor parameter efficiency in video temporal modeling for few-shot action recognition, this paper proposes a lightweight dual-pathway adapter framework that decouples spatial and temporal feature learning. Its core innovation is an anisotropic deformable spatio-temporal attention module, which allows independent control of sampling density along the spatial and temporal dimensions while maintaining efficient global 3D modeling with minimal parameters. The framework is architecture-agnostic, integrating with mainstream image backbones such as ViT and ResNet in a plug-and-play manner. Extensive experiments on multiple few-shot action recognition benchmarks demonstrate substantial improvements over state-of-the-art methods, particularly on tasks requiring complex temporal dynamics, validating both the effectiveness and generalizability of spatio-temporal decoupling.
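The key idea above, encoding the same video through two pathways with different sampling densities along the spatial and temporal axes, can be illustrated with a minimal sketch. This is not the paper's implementation: D$^2$ST-Adapter uses learned deformable attention, whereas here plain strided subsampling stands in for the anisotropic sampling densities, and all function names are hypothetical.

```python
# Illustrative sketch only: strided subsampling stands in for the paper's
# learned deformable spatio-temporal attention. Names are hypothetical.

def anisotropic_sample(video, t_stride, s_stride):
    """Subsample a video (list of T frames, each an HxW grid) with
    independent strides along the temporal and spatial axes."""
    return [
        [row[::s_stride] for row in frame[::s_stride]]
        for frame in video[::t_stride]
    ]

def dual_pathway(video):
    # Spatial pathway: keep full spatial detail, thin out time.
    spatial_feats = anisotropic_sample(video, t_stride=4, s_stride=1)
    # Temporal pathway: keep every frame, coarsen space.
    temporal_feats = anisotropic_sample(video, t_stride=1, s_stride=4)
    return spatial_feats, temporal_feats

# Toy video: 8 frames of an 8x8 "pixel" grid.
video = [[[t * 100 + r * 10 + c for c in range(8)] for r in range(8)]
         for t in range(8)]
sp, tp = dual_pathway(video)
print(len(sp), len(sp[0]), len(sp[0][0]))  # 2 8 8: few frames, fine grid
print(len(tp), len(tp[0]), len(tp[0][0]))  # 8 2 2: all frames, coarse grid
```

The point of the disentanglement is visible in the output shapes: the spatial pathway sees a temporally sparse but spatially dense view, while the temporal pathway sees the opposite, so each pathway can specialize in the features its density profile favors.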
📝 Abstract
Adapting large pre-trained image models to few-shot action recognition has proven to be an effective and efficient strategy for learning robust feature extractors, which is essential for few-shot learning. The typical fine-tuning-based adaptation paradigm is prone to overfitting in few-shot learning scenarios and offers little modeling flexibility for learning temporal features in video data. In this work, we present the Disentangled-and-Deformable Spatio-Temporal Adapter (D$^2$ST-Adapter), a novel adapter tuning framework well-suited for few-shot action recognition due to its lightweight design and low parameter-learning overhead. It uses a dual-pathway architecture to encode spatial and temporal features in a disentangled manner. In particular, we devise the anisotropic Deformable Spatio-Temporal Attention module as the core component of D$^2$ST-Adapter, which can be tailored with anisotropic sampling densities along the spatial and temporal domains to learn spatial and temporal features specifically in the corresponding pathways, allowing our D$^2$ST-Adapter to encode features with a global view of the 3D spatio-temporal space while maintaining a lightweight design. Extensive experiments with instantiations of our method on both pre-trained ResNet and ViT demonstrate the superiority of our method over state-of-the-art methods for few-shot action recognition. Our method is particularly well-suited to challenging scenarios where temporal dynamics are critical for action recognition.
Problem

Research questions and friction points this paper is trying to address.

Adapts image models for few-shot video action recognition
Enables disentangled spatial-temporal feature encoding efficiently
Improves recognition in dynamic temporal-critical scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight dual-pathway adapter for video adaptation
Anisotropic deformable spatio-temporal attention mechanism
Disentangled encoding of spatial and temporal features