🤖 AI Summary
Fine-grained video action recognition (FGAR) faces challenges in modeling subtle temporal variations within localized regions. To address this, we propose the Action-Region Tracking (ART) framework, which dynamically localizes and tracks discriminative action regions via a query-response mechanism, constructing semantically enriched action tracklets. ART introduces two key innovations: a region-specific semantic activation module and a text-constrained query mechanism. It further incorporates a multi-level tracklet contrastive loss and spatiotemporal consistency modeling, while employing task-specific textual fine-tuning to optimize vision-language alignment. Evaluated on multiple mainstream benchmarks, ART significantly outperforms state-of-the-art methods, with consistent gains in fine-grained discriminability, robustness to perturbations, and cross-domain generalization. By enabling interpretable, traceable, and region-aware modeling, ART establishes a novel paradigm for FGAR that bridges low-level motion dynamics with high-level semantic reasoning.
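The query-response mechanism described above can be sketched as cross-attention in which text-derived queries attend over per-frame visual tokens, and each query's per-frame responses are stacked into a tracklet. This is a minimal illustration, not the authors' implementation; all module names, shapes, and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class RegionQueryResponse(nn.Module):
    """Illustrative query-response step: text-constrained queries
    cross-attend over each frame's visual tokens to produce one
    region response per query per frame."""
    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, queries, frame_tokens):
        # queries:      (B, Q, D)    text-derived query embeddings
        # frame_tokens: (B, T, N, D) visual tokens for each of T frames
        responses = []
        for t in range(frame_tokens.shape[1]):
            # every frame answers the same queries -> one response per query
            r, _ = self.attn(queries, frame_tokens[:, t], frame_tokens[:, t])
            responses.append(r)
        # stacking along time links each query's responses across frames,
        # forming one tracklet per query
        return torch.stack(responses, dim=1)  # (B, T, Q, D)

# toy shapes: batch 2, 8 frames, 49 tokens/frame, 4 queries, dim 256
model = RegionQueryResponse(dim=256)
tracklets = model(torch.randn(2, 4, 256), torch.randn(2, 8, 49, 256))
print(tracklets.shape)  # torch.Size([2, 8, 4, 256])
```

Keeping the queries fixed across frames is what makes the responses linkable: the t-th response of query q is, by construction, the same semantic region revisited at time t.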
📝 Abstract
Fine-grained action recognition (FGAR) aims to identify subtle and distinctive differences among fine-grained action categories. However, current recognition methods often capture coarse-grained motion patterns but struggle to identify subtle details in local regions as they evolve over time. In this work, we introduce the Action-Region Tracking (ART) framework, a novel solution that leverages a query-response mechanism to discover and track the dynamics of distinctive local details, enabling effective discrimination of similar actions. Specifically, we propose a region-specific semantic activation module that employs discriminative, text-constrained semantics as queries to capture the most action-relevant region responses in each video frame, facilitating interaction with the corresponding video features across spatial and temporal dimensions. The captured region responses are organized into action tracklets, which characterize region-based action dynamics by linking related responses across video frames in a coherent sequence. The text-constrained queries encode nuanced semantic representations derived from textual descriptions of action labels, extracted by the language branch of Vision-Language Models (VLMs). To optimize the action tracklets, we design a multi-level tracklet contrastive constraint among region responses at the spatial and temporal levels, enabling effective discrimination within each frame and correlation between adjacent frames. Additionally, a task-specific fine-tuning mechanism refines textual semantics so that the semantic representations encoded by VLMs are preserved while being adapted to task preferences. Comprehensive experiments on widely used action recognition benchmarks demonstrate its superiority over previous state-of-the-art baselines.
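The multi-level tracklet contrastive constraint can be sketched as an InfoNCE-style objective over tracklets of shape `(B, T, Q, D)`: the same query's responses in adjacent frames are pulled together (temporal correlation), with the other queries' responses in the neighboring frame serving as negatives (within-frame discrimination). This is a hedged sketch of one plausible form, with illustrative names and a standard cross-entropy formulation, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def tracklet_contrastive_loss(tracklets, tau=0.07):
    """Illustrative two-level constraint on tracklets (B, T, Q, D):
    for each adjacent frame pair, query q's response at frame t should
    match query q's response at frame t+1 (positive), while responses
    of the other queries act as negatives."""
    B, T, Q, D = tracklets.shape
    z = F.normalize(tracklets, dim=-1)  # cosine-similarity space
    loss = 0.0
    for t in range(T - 1):
        # (B, Q, Q) similarities between frames t and t+1
        sim = torch.einsum('bqd,bkd->bqk', z[:, t], z[:, t + 1]) / tau
        # positives sit on the diagonal: same query index across frames
        target = torch.arange(Q, device=z.device).expand(B, Q)
        loss = loss + F.cross_entropy(sim.reshape(B * Q, Q),
                                      target.reshape(B * Q))
    return loss / (T - 1)

loss = tracklet_contrastive_loss(torch.randn(2, 6, 4, 128))
print(loss.item())
```

The single cross-entropy term couples both levels: maximizing the diagonal similarity enforces temporal coherence of each tracklet, while suppressing off-diagonal terms keeps different region responses within a frame mutually discriminative.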