VideoGEM: Training-free Action Grounding in Videos

📅 2025-03-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses zero-shot spatial action localization in videos, a task that is particularly challenging because actions lack distinct physical outlines and are described by high-level semantic concepts. We propose VideoGEM, the first training-free zero-shot action grounding framework. Methodologically, it applies pretrained image- and video-language models (CLIP, OpenCLIP, ViCLIP) to spatial action grounding; extends GEM's self-self attention formulation with a layer weighting that prioritizes higher layers and a dynamic scheme that tunes layer weights per prompt; and introduces a three-level prompt decomposition (action–verb–object) to improve semantic disentanglement and localization accuracy. Evaluated on four video grounding benchmarks—V-HICO, DALY, YouCook-Interactions, and GroundingYouTube—the approach consistently outperforms trained state-of-the-art methods, demonstrating strong effectiveness, generalizability, and practicality in zero-shot action localization.

📝 Abstract
Vision-language foundation models have shown impressive capabilities across various zero-shot tasks, including training-free localization and grounding, primarily focusing on localizing objects in images. However, leveraging those capabilities to localize actions and events in videos is challenging, as actions have less physical outline and are usually described by higher-level concepts. In this work, we propose VideoGEM, the first training-free spatial action grounding method based on pretrained image- and video-language backbones. Namely, we adapt the self-self attention formulation of GEM to spatial activity grounding. We observe that high-level semantic concepts, such as actions, usually emerge in the higher layers of the image- and video-language models. We, therefore, propose a layer weighting in the self-attention path to prioritize higher layers. Additionally, we introduce a dynamic weighting method to automatically tune layer weights to capture each layer's relevance to a specific prompt. Finally, we introduce a prompt decomposition, processing action, verb, and object prompts separately, resulting in a better spatial localization of actions. We evaluate the proposed approach on three image- and video-language backbones, CLIP, OpenCLIP, and ViCLIP, and on four video grounding datasets, V-HICO, DALY, YouCook-Interactions, and GroundingYouTube, showing that the proposed training-free approach is able to outperform current trained state-of-the-art approaches for spatial video grounding.
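The layer-weighting idea from the abstract can be sketched roughly as follows: score each backbone layer by how well its pooled features match the text prompt, softmax those scores into per-layer weights, and combine the per-layer localization maps so that the most prompt-relevant (typically deeper) layers dominate. This is a minimal illustration of the concept, not the paper's implementation; the similarity-based scoring and the function names are assumptions.

```python
import numpy as np

def dynamic_layer_weights(layer_feats, prompt_feat):
    # Hypothetical scoring: cosine similarity between each layer's pooled
    # feature vector and the text prompt embedding, softmaxed over layers.
    sims = np.array([
        float(f @ prompt_feat) / (np.linalg.norm(f) * np.linalg.norm(prompt_feat))
        for f in layer_feats
    ])
    exp = np.exp(sims - sims.max())  # numerically stable softmax
    return exp / exp.sum()

def weighted_heatmap(layer_maps, weights):
    # Weighted sum of per-layer localization maps; layers with higher
    # prompt relevance contribute more to the final action heatmap.
    return sum(w * m for w, m in zip(weights, layer_maps))
```

A static variant of the same idea would simply fix the weights to increase monotonically with layer depth, matching the observation that action semantics emerge in higher layers.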
Problem

Research questions and friction points this paper is trying to address.

Localizing actions in videos without training
Prioritizing higher layers for semantic concepts
Automatically tuning layer weights for prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapts GEM self-attention for action grounding
Introduces dynamic layer weighting for relevance
Decomposes prompts for better spatial localization
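The prompt decomposition above can be pictured as running three separate text queries (full action phrase, verb alone, object alone) and fusing their localization maps. The prompt templates and the averaging fusion below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def decomposed_prompts(verb, obj):
    # Three prompt levels, processed separately by the backbone:
    # full action phrase, verb alone, object alone.
    # The phrase template is a placeholder, not the paper's wording.
    return [f"a person {verb} a {obj}", verb, obj]

def fuse_heatmaps(heatmaps):
    # Simple fusion of the per-prompt localization maps by averaging;
    # the paper's actual combination rule may differ.
    return np.mean(np.stack(heatmaps), axis=0)
```

Separating the verb and object queries lets the object prompt anchor the spatial region while the verb prompt contributes the action semantics that a single combined prompt can blur together.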