Boosting Point-supervised Temporal Action Localization via Text Refinement and Alignment

📅 2026-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing point-supervised temporal action localization methods overlook textual semantics and thus fail to fully exploit the rich semantic information in visual descriptions. This work proposes a Text Refinement and Alignment (TRA) framework that, for the first time, integrates textual semantics under the point-supervision setting. Leveraging a pretrained multimodal model to generate frame-level video captions, TRA introduces a Point-based Text Refinement (PTR) module and a Point-based Multimodal Alignment (PMA) module, enabling point-level text optimization and cross-modal feature alignment; the latter projects all features into a unified semantic space and applies point-level contrastive learning. Evaluated on five mainstream benchmarks, the proposed method significantly outperforms existing approaches while running on a single 24 GB RTX 3090 GPU, demonstrating both practicality and scalability.
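To make the alignment step concrete, the minimal PyTorch sketch below projects visual and textual features into one semantic space, as the summary describes for PMA. The class name `UnifiedProjection`, the feature dimensions, and the linear-projection design are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of the unified semantic space projection attributed
# to PMA. Dimensions (2048-d visual, 768-d text, 512-d shared) are assumed.
class UnifiedProjection(nn.Module):
    def __init__(self, vis_dim=2048, txt_dim=768, embed_dim=512):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, embed_dim)  # visual branch
        self.txt_proj = nn.Linear(txt_dim, embed_dim)  # textual branch

    def forward(self, vis_feat, txt_feat):
        # L2-normalize so both modalities lie on the same unit hypersphere,
        # making cosine similarity a direct measure of cross-modal agreement.
        v = F.normalize(self.vis_proj(vis_feat), dim=-1)
        t = F.normalize(self.txt_proj(txt_feat), dim=-1)
        return v, t

# Usage with batch of 4 videos, 100 frames each (B, T, D):
proj = UnifiedProjection()
v, t = proj(torch.randn(4, 100, 2048), torch.randn(4, 100, 768))
print(v.shape, t.shape)  # torch.Size([4, 100, 512]) for both modalities
```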

📝 Abstract
Recently, point-supervised temporal action localization has gained significant attention for its effective balance between labeling cost and localization accuracy. However, current methods consider only features from visual inputs, neglecting helpful semantic information from the text side. To address this issue, we propose a Text Refinement and Alignment (TRA) framework that effectively utilizes semantically rich textual features from visual descriptions to complement the visual features. This is achieved by adding two new modules to the original point-supervised framework: a Point-based Text Refinement (PTR) module and a Point-based Multimodal Alignment (PMA) module. Specifically, we first generate descriptions for video frames using a pre-trained multimodal model. Next, PTR refines the initial descriptions by leveraging point annotations together with multiple pre-trained models. PMA then projects all features into a unified semantic space and applies point-level multimodal contrastive learning to reduce the gap between the visual and linguistic modalities. Finally, the enhanced multimodal features are fed into the action detector for precise localization. Extensive experimental results on five widely used benchmarks demonstrate the favorable performance of our proposed framework compared to several state-of-the-art methods. Moreover, our computational overhead analysis shows that the framework can run on a single 24 GB RTX 3090 GPU, indicating its practicality and scalability.
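The point-level contrastive learning the abstract mentions can be read as an InfoNCE-style objective restricted to annotated point frames; the hedged sketch below follows that reading. The function name `point_contrastive_loss`, the temperature value, and the use of other point frames as negatives are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Illustrative point-level multimodal contrastive loss: only frames carrying
# point annotations contribute, each paired with the text feature of the same
# frame; other annotated frames serve as negatives. This is one plausible
# reading of PMA's objective, not the paper's confirmed loss.
def point_contrastive_loss(v, t, point_mask, temperature=0.07):
    # v, t: (T, D) unit-normalized visual/text features for one video
    # point_mask: (T,) bool, True at frames with point annotations
    v_p, t_p = v[point_mask], t[point_mask]      # (P, D) point features
    logits = v_p @ t_p.t() / temperature         # (P, P) similarity matrix
    targets = torch.arange(v_p.size(0))          # diagonal = matched pairs
    # Symmetric loss: align video-to-text and text-to-video directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage: 100 frames, 512-d shared embeddings, four annotated points.
v = F.normalize(torch.randn(100, 512), dim=-1)
t = F.normalize(torch.randn(100, 512), dim=-1)
mask = torch.zeros(100, dtype=torch.bool)
mask[[10, 37, 62, 88]] = True                    # one point per action instance
print(point_contrastive_loss(v, t, mask).item())
```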
Problem

Research questions and friction points this paper is trying to address.

point-supervised temporal action localization
text refinement
multimodal alignment
semantic information
visual-textual features
Innovation

Methods, ideas, or system contributions that make the work stand out.

point-supervised temporal action localization
text refinement
multimodal alignment
contrastive learning
semantic space