A Renaissance of Explicit Motion Information Mining from Transformers for Action Recognition

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Transformer-based action recognition methods suffer from limited performance on motion-sensitive datasets (e.g., Something-Something), primarily due to the lack of explicit modeling of fine-grained motion cues. To address this, we propose the Explicit Motion Information Mining module (EMIM), the first to incorporate the optical flow cost volume paradigm into Transformer architectures. EMIM constructs a motion affinity matrix by sampling tokens across adjacent frames via sliding windows, decouples appearance and motion feature learning into dual parallel pathways, and explicitly integrates motion cues into self-attention computation. This design enables end-to-end differentiable extraction and utilization of motion features. Extensive experiments demonstrate that our method achieves state-of-the-art performance on four major benchmarks—including substantial gains on Something-Something V1 and V2—validating the critical role of explicit motion modeling in action recognition.

📝 Abstract
Recently, action recognition has been dominated by Transformer-based methods, thanks to their spatiotemporal contextual aggregation capabilities. However, despite significant progress on scene-related datasets, these methods do not perform well on motion-sensitive datasets due to the lack of elaborate motion modeling designs. Meanwhile, we observe that the cost volume widely used in traditional action recognition is highly similar to the affinity matrix defined in self-attention, yet comes with powerful motion modeling capabilities. In light of this, we propose to integrate these effective motion modeling properties into the existing Transformer in a unified and neat way via the proposed Explicit Motion Information Mining (EMIM) module. In EMIM, we construct the desired affinity matrix in a cost-volume style, where the set of key candidate tokens is sampled from the query-based neighboring area in the next frame in a sliding-window manner. The constructed affinity matrix is then used to aggregate contextual information for appearance modeling and is also converted into motion features for motion modeling. We validate the motion modeling capabilities of our method on four widely used datasets, where it outperforms existing state-of-the-art approaches, especially on the motion-sensitive datasets Something-Something V1 & V2.
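To make the cost-volume-style affinity concrete, here is a minimal numpy sketch (not the paper's implementation; function name, token-grid layout, and window size are illustrative assumptions): each query token in frame t is scored against only the key tokens in a small sliding window of the next frame, yielding a local affinity matrix analogous to an optical-flow cost volume.

```python
import numpy as np

def sliding_window_affinity(frame_t, frame_t1, window=3):
    """Cost-volume-style affinity: each query token in frame t attends only
    to a (window x window) neighborhood of key tokens in frame t+1.
    frame_t, frame_t1: (H, W, C) token grids -> (H, W, window*window) scores."""
    H, W, C = frame_t.shape
    r = window // 2
    # Pad the next frame so every query has a full neighborhood.
    padded = np.pad(frame_t1, ((r, r), (r, r), (0, 0)))
    affinity = np.empty((H, W, window * window))
    for i in range(H):
        for j in range(W):
            q = frame_t[i, j]                              # query token
            keys = padded[i:i + window, j:j + window].reshape(-1, C)
            affinity[i, j] = keys @ q / np.sqrt(C)         # scaled dot-product
    return affinity
```

A full model would compute this per attention head with learned query/key projections; the sketch keeps only the sliding-window sampling that distinguishes this from dense global self-attention.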
Problem

Research questions and friction points this paper is trying to address.

Integrate cost-volume motion modeling into transformers for action recognition
Address poor performance on motion-sensitive datasets with explicit motion mining
Construct affinity matrices from inter-frame tokens to enhance motion features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates cost volume-style affinity matrix into transformers
Samples key tokens from next-frame neighboring areas
Converts affinity matrix into motion features for modeling
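The last step above, turning the affinity matrix into motion features, can be sketched with a soft-argmax over the cost volume, one common conversion in flow-style methods (this is an illustrative assumption, not the paper's exact design; the function name and offsets layout are hypothetical):

```python
import numpy as np

def affinity_to_motion(affinity, window=3):
    """Soft-argmax over the cost volume: convert each query's
    (window*window) affinity scores into an expected 2-D displacement.
    affinity: (H, W, window*window) -> motion field: (H, W, 2)."""
    r = window // 2
    # Relative (dy, dx) offset of every position inside the sliding window.
    offsets = np.array([(dy, dx) for dy in range(-r, r + 1)
                                 for dx in range(-r, r + 1)], dtype=float)
    # Softmax over the window dimension gives matching probabilities.
    e = np.exp(affinity - affinity.max(axis=-1, keepdims=True))
    prob = e / e.sum(axis=-1, keepdims=True)
    return prob @ offsets   # probability-weighted expected displacement
```

Because the softmax and the weighted sum are differentiable, a motion feature built this way can be trained end to end alongside the appearance pathway, which is the property the summary emphasizes.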