Open Vocabulary Multi-Label Video Classification

πŸ“… 2024-07-12
πŸ›οΈ European Conference on Computer Vision
πŸ“ˆ Citations: 3
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses open-vocabulary multi-label video classification, aiming to jointly recognize multiple concurrent actions and entities in videos. To overcome single-label constraints and enhance fine-grained generalization, we propose three key innovations: (1) an LLM-guided soft-attribute prompting mechanism that generates semantically rich, learnable textual prompts; (2) a lightweight temporal modeling module tailored for CLIP’s visual encoder to strengthen inter-frame dynamic representation; and (3) an end-to-end prompt tuning strategy regularized by contrastive loss. Our approach exclusively leverages pretrained vision-language models (VLMs) and large language models (LLMs) in synergy, requiring no additional annotations. Extensive experiments demonstrate substantial improvements over state-of-the-art methods across multiple benchmarks. Notably, our method exhibits strong generalization to unseen categories and complex spatiotemporal concepts, validating its robustness and scalability in open-vocabulary settings.

Technology Category

Application Category

πŸ“ Abstract
Pre-trained vision-language models (VLMs) have enabled significant progress in open vocabulary computer vision tasks such as image classification, object detection and image segmentation. Some recent works have focused on extending VLMs to open vocabulary single label action classification in videos. However, previous methods fall short in holistic video understanding which requires the ability to simultaneously recognize multiple actions and entities e.g., objects in the video in an open vocabulary setting. We formulate this problem as open vocabulary multilabel video classification and propose a method to adapt a pre-trained VLM such as CLIP to solve this task. We leverage large language models (LLMs) to provide semantic guidance to the VLM about class labels to improve its open vocabulary performance with two key contributions. First, we propose an end-to-end trainable architecture that learns to prompt an LLM to generate soft attributes for the CLIP text-encoder to enable it to recognize novel classes. Second, we integrate a temporal modeling module into CLIP's vision encoder to effectively model the spatio-temporal dynamics of video concepts as well as propose a novel regularized finetuning technique to ensure strong open vocabulary classification performance in the video domain. Our extensive experimentation showcases the efficacy of our approach on multiple benchmark datasets.
Problem

Research questions and friction points this paper is trying to address.

Extends vision-language models to multi-label video classification
Enables simultaneous recognition of multiple actions and entities
Improves open vocabulary performance using semantic guidance from LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-generated soft attributes enhance CLIP text-encoder
Temporal module models spatio-temporal video dynamics
Regularized finetuning maintains open vocabulary performance
πŸ”Ž Similar Papers
No similar papers found.