🤖 AI Summary
This work addresses the challenging problem of prior-free, class-agnostic video-based repetitive action counting under realistic disturbances such as action interruptions and temporal inconsistencies. To improve counting robustness, we propose a Localization-Aware Multi-Scale Representation Learning (LMRL) framework. Its core components are: (1) a Multi-Scale Period-Aware Representation (MPR) module with a scale-specific design that models temporal periodicity across different action frequencies; and (2) a Repetition Foreground Localization (RFL) module that localizes periodic action regions and suppresses noise via foreground-guided similarity learning and global semantic fusion. The two modules are jointly optimized end-to-end, improving both the discriminability and the robustness of periodic action representations. Extensive experiments demonstrate state-of-the-art performance on the RepCountA and UCFRep benchmarks, with strong generalization across diverse scenes and action categories.
📝 Abstract
Repetitive action counting (RAC) aims to estimate the number of class-agnostic action occurrences in a video without exemplars. Most current RAC methods rely on a raw frame-to-frame similarity representation for period prediction. However, this representation is easily disrupted by common noise such as action interruptions and inconsistencies, leading to sub-optimal counting in realistic scenarios. In this paper, we introduce a foreground localization optimization objective into similarity representation learning to obtain more robust and efficient video features. We propose a Localization-Aware Multi-Scale Representation Learning (LMRL) framework. Specifically, we apply a Multi-Scale Period-Aware Representation (MPR) with a scale-specific design to accommodate various action frequencies and learn more flexible temporal correlations. Furthermore, we introduce the Repetition Foreground Localization (RFL) method, which enhances the representation by coarsely identifying periodic action regions and incorporating global semantic information. The two modules are jointly optimized, yielding a more discriminative periodic action representation that reduces the impact of noise and improves counting accuracy. The framework is also designed to be scalable and adaptable to different types of video content. Experimental results on the RepCountA and UCFRep datasets demonstrate that our proposed method handles repetitive action counting effectively.
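The "raw frame-to-frame similarity representation" the abstract refers to is typically a temporal self-similarity matrix: for a periodic action, similarity between frames one period apart produces diagonal stripes whose spacing reveals the count. The sketch below illustrates this idea, plus a simple average-pooling scheme for building similarity maps at multiple temporal scales in the spirit of MPR. This is a minimal NumPy illustration, not the paper's implementation; the function names, the cosine metric, and the pooling scheme are assumptions for exposition.

```python
import numpy as np

def self_similarity(features):
    # Cosine similarity between every pair of frames:
    # features is (T, D) per-frame embeddings -> returns (T, T) matrix.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    return normed @ normed.T

def multi_scale_similarity(features, scales=(1, 2, 4)):
    # One similarity map per temporal scale, obtained by average-pooling
    # groups of `s` consecutive frames (a stand-in for scale-specific design).
    maps = {}
    for s in scales:
        T = features.shape[0] // s * s  # drop trailing frames that don't fill a window
        pooled = features[:T].reshape(-1, s, features.shape[1]).mean(axis=1)
        maps[s] = self_similarity(pooled)
    return maps

# Toy periodic signal: embeddings repeat every 4 frames, so the
# self-similarity matrix shows off-diagonal stripes at lag 4.
rng = np.random.default_rng(0)
pattern = rng.normal(size=(4, 8))      # one "period" of 4 frames, 8-dim embeddings
feats = np.tile(pattern, (6, 1))       # 24 frames = 6 repetitions
sim = self_similarity(feats)           # sim[i, i + 4] is ~1 for all valid i
maps = multi_scale_similarity(feats)
```

In a real pipeline the embeddings would come from a video backbone, and noise from interruptions would blur or break these stripes, which is exactly the failure mode the foreground localization objective is meant to counteract.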