Building a Multi-modal Spatiotemporal Expert for Zero-shot Action Recognition with CLIP

📅 2024-12-13
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address CLIP’s limitations in modeling fine-grained spatiotemporal dynamics and aligning action semantics for zero-shot action recognition (ZSAR), this paper proposes a vision-language collaborative multi-modal spatiotemporal understanding framework. Methodologically, it introduces: (1) a lightweight Space-time Cross Attention module that enables frame-level temporal modeling and visual feature enhancement without adding parameters; and (2) a text prompt augmentation mechanism grounded in an Action Semantic Knowledge Graph (ASKG), which supports precise frame-to-prompt alignment. During training, frame-level video representations are aligned with prompt-level text representations while being regularized by a frozen CLIP, substantially improving generalization to unseen action classes. Extensive experiments demonstrate state-of-the-art performance on Kinetics-600, UCF101, and HMDB51, validating the critical role of joint multi-modal spatiotemporal modeling in ZSAR.
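A minimal sketch of the vision-side idea, not the authors' code: the paper describes parameter-free operations applied before and after CLIP's spatial attention to mix information across frames. The sketch below uses a temporal channel shift as the illustrative mixing operation; the tensor layout, `fold_div`, and the `spatial_attn` callable are assumptions rather than the paper's exact design.

```python
# Illustrative sketch (assumed, not the authors' implementation): parameter-free
# temporal mixing wrapped around CLIP's frozen spatial attention, in the spirit
# of the Space-time Cross Attention described above.
import torch

def temporal_shift(x: torch.Tensor, fold_div: int = 8) -> torch.Tensor:
    """Shift a fraction of channels across adjacent frames (adds no parameters).

    x: (B, T, N, C) -- batch, frames, patch tokens, channels.
    """
    B, T, N, C = x.shape
    fold = C // fold_div
    out = torch.zeros_like(x)
    out[:, 1:, :, :fold] = x[:, :-1, :, :fold]                   # shift forward in time
    out[:, :-1, :, fold:2 * fold] = x[:, 1:, :, fold:2 * fold]   # shift backward in time
    out[:, :, :, 2 * fold:] = x[:, :, :, 2 * fold:]              # remaining channels untouched
    return out

def space_time_block(x: torch.Tensor, spatial_attn) -> torch.Tensor:
    """Apply parameter-free temporal mixing before and after spatial attention.

    spatial_attn: any callable mapping (B*T, N, C) -> (B*T, N, C),
    e.g. a frozen CLIP attention block applied frame by frame.
    """
    B, T, N, C = x.shape
    x = temporal_shift(x)                      # pre-attention temporal mixing
    x = spatial_attn(x.reshape(B * T, N, C))   # per-frame spatial attention
    x = x.reshape(B, T, N, C)
    return temporal_shift(x)                   # post-attention temporal mixing
```

Because the shift only rearranges existing channels, it adds no parameters and negligible compute, matching the efficiency claim in the summary.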

📝 Abstract
Zero-shot action recognition (ZSAR) requires collaborative multi-modal spatiotemporal understanding. However, finetuning CLIP directly for ZSAR yields suboptimal performance, given its inherent constraints in capturing essential temporal dynamics from both vision and text perspectives, especially when encountering novel actions with fine-grained spatiotemporal discrepancies. In this work, we propose Spatiotemporal Dynamic Duo (STDD), a novel CLIP-based framework to comprehend multi-modal spatiotemporal dynamics synergistically. For the vision side, we propose an efficient Space-time Cross Attention, which captures spatiotemporal dynamics flexibly with simple yet effective operations applied before and after spatial attention, without adding additional parameters or increasing computational complexity. For the semantic side, we conduct spatiotemporal text augmentation by comprehensively constructing an Action Semantic Knowledge Graph (ASKG) to derive nuanced text prompts. The ASKG elaborates on static and dynamic concepts and their interrelations, based on the idea of decomposing actions into spatial appearances and temporal motions. During the training phase, the frame-level video representations are meticulously aligned with prompt-level nuanced text representations, which are concurrently regulated by the video representations from the frozen CLIP to enhance generalizability. Extensive experiments validate the effectiveness of our approach, which consistently surpasses state-of-the-art approaches on popular video benchmarks (i.e., Kinetics-600, UCF101, and HMDB51) under challenging ZSAR settings.
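To make the training objective in the abstract concrete, here is a hedged sketch of frame-prompt alignment regularized by the frozen CLIP. The pooling scheme, temperature `tau`, weight `alpha`, and all function names are assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of the training objective described in the abstract:
# frame-level video features are aligned with prompt-level text features,
# while logits from the frozen CLIP act as a regularizer.
import torch
import torch.nn.functional as F

def zsar_loss(frame_feats, prompt_feats, frozen_logits, labels, tau=0.07, alpha=0.5):
    """frame_feats:   (B, T, D) per-frame video embeddings (L2-normalized).
    prompt_feats:  (K, P, D) per-class, per-prompt text embeddings (L2-normalized).
    frozen_logits: (B, K) video-class logits from the frozen CLIP.
    labels:        (B,) ground-truth class indices for seen classes.
    """
    # Fine-grained frame <-> prompt similarities: (B, T, K, P)
    sim = torch.einsum('btd,kpd->btkp', frame_feats, prompt_feats)
    logits = sim.mean(dim=(1, 3)) / tau            # pool over frames and prompts -> (B, K)
    ce = F.cross_entropy(logits, labels)           # supervised alignment on seen classes
    # Keep the adapted model close to the frozen CLIP to preserve generalization.
    kd = F.kl_div(F.log_softmax(logits, dim=-1),
                  F.softmax(frozen_logits / tau, dim=-1),
                  reduction='batchmean')
    return ce + alpha * kd
```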
Problem

Research questions and friction points this paper is trying to address.

Directly fine-tuning CLIP yields suboptimal zero-shot action recognition (ZSAR)
Capturing multi-modal spatiotemporal dynamics from both vision and text
Recognizing novel actions with fine-grained spatiotemporal discrepancies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatiotemporal Dynamic Duo (STDD), a CLIP-based framework for multi-modal spatiotemporal understanding
Space-time Cross Attention: parameter-free temporal modeling around spatial attention
Action Semantic Knowledge Graph (ASKG) for nuanced text prompts (see the sketch after this list)
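As referenced above, a toy sketch of how ASKG-style decomposition into static and dynamic concepts could be rendered into text prompts. The node schema and prompt templates are illustrative assumptions, not the paper's actual graph structure or wording.

```python
# Toy sketch (assumed schema): decompose an action class into spatial
# appearance and temporal motion concepts, then render prompt-level texts.
from dataclasses import dataclass, field

@dataclass
class ActionNode:
    action: str                                   # action class name
    static: list = field(default_factory=list)    # appearance concepts (objects, scene)
    dynamic: list = field(default_factory=list)   # motion concepts (movements)

def render_prompts(node: ActionNode) -> list:
    """Turn one ASKG node into prompt-level text descriptions."""
    prompts = [f"a video of a person {node.action}"]
    prompts += [f"a photo showing {c}, related to {node.action}" for c in node.static]
    prompts += [f"the motion of {c} while {node.action}" for c in node.dynamic]
    return prompts

# Example usage with a hypothetical entry:
guitar = ActionNode("playing guitar",
                    static=["a guitar", "a person's hands on strings"],
                    dynamic=["fingers strumming strings", "hand sliding along the fretboard"])
print(render_prompts(guitar))
```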
Authors

Yating Yu
Northwestern Polytechnical University
Video Understanding

Congqi Cao
School of Computer Science, Northwestern Polytechnical University
Computer Vision, Action Recognition

Yueran Zhang
Northwestern Polytechnical University, Xi'an, Shaanxi, 710129, P.R. China

Qinyi Lv
Northwestern Polytechnical University, Xi'an, Shaanxi, 710129, P.R. China

Lingtong Min
Northwestern Polytechnical University, Xi'an, Shaanxi, 710129, P.R. China

Yanning Zhang
Northwestern Polytechnical University
Computer Vision