Zero-shot Action Localization via the Confidence of Large Vision-Language Models

📅 2024-10-18
🏛️ arXiv.org
📈 Citations: 1
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the challenge of precise action localization in long videos from specialized domains (e.g., surgery, sports), where scarce annotated data hinders supervised learning, this paper introduces ZEAL, a purely zero-shot action localization framework. ZEAL requires no training data and no explicit temporal video modeling. Instead, it leverages LLM-distilled action commonsense to generate fine-grained textual descriptions of an action's start and end, employs a large vision-language model (LVLM) for cross-modal confidence scoring over keyframes, and aggregates the frame-level responses into temporal localizations. Its core contribution is transforming LLM-encoded action priors into queryable textual prompts, bypassing conventional supervised fine-tuning and complex video architectures. Evaluated on a challenging benchmark under a strict zero-shot setting, ZEAL delivers strong results, pointing to a practical approach for low-resource, domain-specific video understanding.
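
As a rough illustration of the pipeline the summary describes, the sketch below generates start/end descriptions with an LLM and scores each keyframe with an LVLM's yes/no confidence. `chat_llm` and `score_frame_yes` are hypothetical callables standing in for the actual models; they are not APIs from the paper or any particular library.

```python
# Minimal sketch of a ZEAL-style pipeline, assuming hypothetical model interfaces:
#   chat_llm(prompt) -> str            (an LLM chat completion)
#   score_frame_yes(frame, q) -> float (an LVLM's confidence that the answer is "yes")

def describe_action_boundaries(chat_llm, action: str) -> tuple[str, str]:
    """Ask an LLM for fine-grained descriptions of how the action starts and ends."""
    start_desc = chat_llm(
        f"Describe, in one sentence, the visual moment when the action "
        f"'{action}' typically begins."
    )
    end_desc = chat_llm(
        f"Describe, in one sentence, the visual moment when the action "
        f"'{action}' typically ends."
    )
    return start_desc, end_desc


def frame_confidences(score_frame_yes, frames, start_desc: str, end_desc: str):
    """Query the LVLM per keyframe and collect start/end confidence scores in [0, 1]."""
    start_scores, end_scores = [], []
    for frame in frames:
        start_scores.append(score_frame_yes(frame, f"Does this frame show: {start_desc}?"))
        end_scores.append(score_frame_yes(frame, f"Does this frame show: {end_desc}?"))
    return start_scores, end_scores
```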

πŸ“ Abstract
Precise action localization in untrimmed video is vital for fields such as professional sports and minimally invasive surgery, where the delineation of particular motions in recordings can dramatically enhance analysis. But in many cases, large-scale datasets with video-label pairs for localization are unavailable, limiting the opportunity to fine-tune video-understanding models. Recent developments in large vision-language models (LVLMs) address this need with impressive zero-shot capabilities in a variety of video understanding tasks. However, the adaptation of image-based LVLMs, with their powerful visual question answering capabilities, to action localization in long-form video is still relatively unexplored. To this end, we introduce a true ZEro-shot Action Localization method (ZEAL). Specifically, we leverage the built-in action knowledge of a large language model (LLM) to inflate actions into highly detailed descriptions of the archetypal start and end of the action. These descriptions serve as queries to an LVLM for generating frame-level confidence scores, which can be aggregated to produce localization outputs. The simplicity and flexibility of our method make it amenable to more capable LVLMs as they are developed, and we demonstrate remarkable results in zero-shot action localization on a challenging benchmark, without any training.
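
The abstract's final step, aggregating frame-level confidence scores into localization outputs, can be illustrated with a short sketch. The pairing rule below is a plain thresholding heuristic, not the aggregation actually used in the paper, and `fps` and `threshold` are illustrative parameters.

```python
# Illustrative-only aggregation of per-frame start/end confidences into intervals.
# Each frame whose start confidence exceeds the threshold is paired with the next
# frame whose end confidence exceeds it; the paper's aggregation may differ.

def aggregate_intervals(start_scores, end_scores, fps: float, threshold: float = 0.5):
    """Turn frame-level confidences into (start_sec, end_sec, score) proposals."""
    proposals = []
    i, n = 0, len(start_scores)
    while i < n:
        if start_scores[i] >= threshold:
            # Search forward for the first confident end frame.
            for j in range(i + 1, n):
                if end_scores[j] >= threshold:
                    score = (start_scores[i] + end_scores[j]) / 2
                    proposals.append((i / fps, j / fps, score))
                    i = j  # continue scanning after this interval
                    break
        i += 1
    return proposals


# Example: 1 fps keyframes with a confident start at frame 2 and end at frame 5.
print(aggregate_intervals([0.1, 0.2, 0.9, 0.3, 0.2, 0.1],
                          [0.0, 0.1, 0.2, 0.3, 0.4, 0.8], fps=1.0))
# -> [(2.0, 5.0, 0.85)]
```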
Problem

Research questions and friction points this paper is trying to address.

Localizing actions in untrimmed videos without labeled data
Adapting vision-language models for zero-shot action localization
Generating frame-level confidence scores for precise action delineation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages LVLM for zero-shot localization
Uses LLM to generate detailed action descriptions
Aggregates frame-level scores for localization
Authors
Josiah Aklilu (PhD student, Stanford University; Artificial Intelligence, Computer Vision)
Xiaohan Wang (Stanford University)
S. Yeung-Levy (Stanford University)