🤖 AI Summary
This work addresses the reliance on post-processing steps, such as IoU-based matching and temporal segmentation, in spatio-temporal action detection. We propose an end-to-end framework that generates action tubes directly, eliminating the need for hand-crafted heuristics. Our core innovation is the Query Matching Module (QMM), built on the DETR architecture: it performs frame-level detection and then aligns queries belonging to the same person across frames via metric learning, supporting variable-length video inputs and jointly optimizing action localization and classification. By removing conventional post-processing, our method enables truly end-to-end training and inference. Experiments on JHMDB, UCF101-24, and AVA demonstrate significant improvements in detecting actions with large displacements, while reducing computational overhead and GPU memory consumption. These results validate the method's efficiency and generalization across diverse benchmarks.
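The cross-frame alignment step can be illustrated with a minimal sketch: each frame's DETR decoder yields one embedding per detected person, and tubes are extended by matching each existing tube's query to its most similar query in the next frame. The function name, greedy nearest-neighbour matching, and cosine similarity are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def link_queries(prev_emb: np.ndarray, curr_emb: np.ndarray) -> list:
    """Link per-frame query embeddings across consecutive frames.

    prev_emb: (P, D) embeddings of queries from the previous frame.
    curr_emb: (Q, D) embeddings of queries in the current frame.
    Returns, for each previous query, the index of its most similar
    current-frame query (cosine similarity), extending its tube.
    Greedy nearest-neighbour matching is an assumption for this sketch.
    """
    # Normalise rows so the dot product equals cosine similarity.
    p = prev_emb / np.linalg.norm(prev_emb, axis=1, keepdims=True)
    c = curr_emb / np.linalg.norm(curr_emb, axis=1, keepdims=True)
    sim = p @ c.T                       # (P, Q) similarity matrix
    return sim.argmax(axis=1).tolist()  # best match per existing tube
```

Because matching operates only on learned query embeddings, no IoU computation between boxes is needed to link detections into tubes.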
📝 Abstract
This paper proposes a method for spatio-temporal action detection (STAD) that directly generates action tubes from the original video without relying on post-processing steps such as IoU-based linking and clip splitting. Our approach applies query-based detection (DETR) to each frame and matches DETR queries to link the same person across frames. We introduce the Query Matching Module (QMM), which uses metric learning to pull queries of the same person in different frames closer together than queries of different people. Action classes are predicted from the sequence of queries obtained by QMM matching, allowing variable-length inputs from videos longer than a single clip. Experimental results on the JHMDB, UCF101-24, and AVA datasets demonstrate that our method handles large inter-frame displacements of people well while offering superior computational efficiency and lower resource requirements.
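The metric-learning objective described above can be sketched as a triplet-style loss on query embeddings: an anchor query is pulled toward a query of the same person from another frame and pushed away from a query of a different person. The function name, the cosine-distance formulation, and the margin value are assumptions for illustration; the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def query_matching_loss(anchor: torch.Tensor,
                        positive: torch.Tensor,
                        negative: torch.Tensor,
                        margin: float = 0.5) -> torch.Tensor:
    """Triplet-style metric-learning loss on DETR query embeddings.

    anchor/positive: queries of the SAME person in different frames.
    negative: a query of a DIFFERENT person.
    The loss pushes the positive pair to be closer (in cosine distance)
    than the negative pair by at least `margin`. Illustrative sketch only.
    """
    d_pos = 1.0 - F.cosine_similarity(anchor, positive, dim=-1)
    d_neg = 1.0 - F.cosine_similarity(anchor, negative, dim=-1)
    return F.relu(d_pos - d_neg + margin).mean()
```

With such an objective, nearest-neighbour matching of query embeddings at inference time suffices to link the same person across frames, replacing IoU-based tube linking.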