🤖 AI Summary
Few-Shot Action Recognition (FSAR) faces a core challenge: large inter-video temporal structural variance, coupled with diverse action durations and speeds, severely hinders reliable temporal matching. Existing frame-level and tuple-level alignment methods generalize poorly because they rely on predefined, length-sensitive alignment units. To address this, we propose a **temporal-agnostic action matching paradigm**: (1) We introduce **pattern tokenization**, a novel representation that encodes global discriminative cues into a fixed number of tokens, eliminating the need for explicit temporal alignment; (2) We design an **inter-class commonality suppression mechanism**, which adaptively identifies and suppresses shared noise across classes to sharpen decision boundaries for novel categories. Our method achieves significant improvements over state-of-the-art approaches across multiple FSAR benchmarks, delivering both superior accuracy and computational efficiency. The source code is publicly available.
📝 Abstract
Few-Shot Action Recognition (FSAR) aims to train a model with only a few labeled video instances. A key challenge in FSAR is handling divergent narrative trajectories for precise video matching. While frame- and tuple-level alignment approaches have shown promise, they rely heavily on pre-defined, length-dependent alignment units (e.g., frames or tuples), which limits flexibility for actions of varying lengths and speeds. In this work, we introduce a novel TEmporal Alignment-free Matching (TEAM) approach, which eliminates the need for temporal units in action representation and for brute-force alignment during matching. Specifically, TEAM represents each video with a fixed set of pattern tokens that capture globally discriminative clues within the video instance regardless of action length or speed, ensuring flexibility. Furthermore, TEAM is inherently efficient: it uses token-wise comparisons to measure similarity between videos, unlike existing methods that rely on pairwise comparisons for temporal alignment. Additionally, we propose an adaptation process that identifies and removes common information across classes, establishing clear boundaries even between novel categories. Extensive experiments demonstrate the effectiveness of TEAM. Code is available at github.com/leesb7426/TEAM.
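To make the efficiency claim concrete, the sketch below contrasts the two matching regimes the abstract describes: token-wise comparison over a fixed set of N pattern tokens (cost O(N)) versus brute-force pairwise comparison over all frame pairs (cost O(T_a × T_b)). This is an illustrative sketch only; the token names, cosine scoring, and averaging are assumptions for exposition, not TEAM's actual similarity function.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv)

def tokenwise_similarity(tokens_a, tokens_b):
    """Token-wise matching: both videos are encoded as the SAME fixed
    number of pattern tokens, so similarity is N corresponding-token
    comparisons -- no temporal alignment over frames is needed.
    (Hypothetical scoring; TEAM's actual measure may differ.)"""
    assert len(tokens_a) == len(tokens_b)
    return sum(cosine(u, v) for u, v in zip(tokens_a, tokens_b)) / len(tokens_a)

def pairwise_alignment_cost(frames_a, frames_b):
    """Baseline regime the abstract contrasts against: every frame of
    video A is compared with every frame of video B, so the number of
    comparisons grows with both video lengths."""
    comparisons = [(i, j) for i in range(len(frames_a))
                          for j in range(len(frames_b))]
    return len(comparisons)  # O(T_a * T_b)

# Two videos of different lengths still map to the same N token set,
# so the token-wise path always does exactly N comparisons.
N, D = 8, 16
tokens_a = [[float(i + j) for j in range(D)] for i in range(N)]
tokens_b = [[float(i * 2 + j) for j in range(D)] for i in range(N)]
score = tokenwise_similarity(tokens_a, tokens_b)
```

Note that `pairwise_alignment_cost` depends on both raw video lengths, while the token-wise path is fixed at N comparisons regardless of action length or speed, which is the flexibility and efficiency argument made above.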