End-to-end Open-vocabulary Video Visual Relationship Detection using Multi-modal Prompting

📅 2024-09-19
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Open-vocabulary video visual relationship detection aims to identify unseen relationships among both seen and unseen objects in videos; however, existing approaches rely on closed-set trajectory detectors, which limits their generalization. This paper proposes an end-to-end framework that jointly performs trajectory detection and relationship classification, enabling unified recognition of unseen objects and relationships. The key contributions are: (1) a relationship-aware open-vocabulary trajectory detector that eliminates dependence on pre-trained trajectory models; (2) a multi-modal prompting mechanism integrating spatio-temporal visual prompts with vision-guided language prompts; and (3) a CLIP-distilled, query-based Transformer decoder, coupled with a trajectory association module and an auxiliary relationship loss. Experiments demonstrate significant performance gains on VidVRD and VidOR, along with strong cross-dataset generalization.

📝 Abstract
Open-vocabulary video visual relationship detection aims to expand video visual relationship detection beyond annotated categories by detecting unseen relationships between both seen and unseen objects in videos. Existing methods usually use trajectory detectors trained on closed datasets to detect object trajectories, and then feed these trajectories into large-scale pre-trained vision-language models to achieve open-vocabulary classification. Such heavy dependence on the pre-trained trajectory detectors limits their ability to generalize to novel object categories, leading to performance degradation. To address this challenge, we propose to unify object trajectory detection and relationship classification into an end-to-end open-vocabulary framework. Under this framework, we propose a relationship-aware open-vocabulary trajectory detector. It primarily consists of a query-based Transformer decoder, where the visual encoder of CLIP is distilled for frame-wise open-vocabulary object detection, and a trajectory associator. To exploit relationship context during trajectory detection, a relationship query is embedded into the Transformer decoder, and accordingly, an auxiliary relationship loss is designed to enable the decoder to perceive the relationships between objects explicitly. Moreover, we propose an open-vocabulary relationship classifier that leverages the rich semantic knowledge of CLIP to discover novel relationships. To adapt CLIP well to relationship classification, we design a multi-modal prompting method that employs spatio-temporal visual prompting for visual representation and vision-guided language prompting for language input. Extensive experiments on two public datasets, VidVRD and VidOR, demonstrate the effectiveness of our framework. Our framework is also applied to a more difficult cross-dataset scenario to further demonstrate its generalization ability.
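The abstract's open-vocabulary relationship classifier matches a subject-object pair's visual feature against CLIP text embeddings of relation names, so unseen relation labels can be scored without retraining. A minimal sketch of that idea, using toy 3-d vectors in place of real CLIP features (the `classify_relation` helper, the labels, and the embeddings are illustrative assumptions, not the paper's implementation):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def classify_relation(pair_feature, label_embeddings):
    """Score one subject-object pair feature against every relation
    label embedding and return the best match plus all scores.
    Open vocabulary: the label set may include relations never seen
    during training, since only their text embeddings are needed."""
    scores = {label: cosine(pair_feature, emb)
              for label, emb in label_embeddings.items()}
    return max(scores, key=scores.get), scores

# Toy 3-d vectors standing in for CLIP text features of relation names.
label_embeddings = {
    "ride":  [0.9, 0.1, 0.0],
    "chase": [0.1, 0.9, 0.1],
    "watch": [0.0, 0.2, 0.9],
}
best, scores = classify_relation([0.8, 0.2, 0.1], label_embeddings)
# best is the relation whose text embedding is closest to the pair feature
```

In the actual framework the pair feature would come from the trajectory detector and the label embeddings from CLIP's text encoder; the cosine-matching step itself is the part sketched here.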
Problem

Research questions and friction points this paper is trying to address.

Detect unseen object relationships in videos
Reduce dependence on pre-trained trajectory detectors
Improve generalization to novel object categories
Innovation

Methods, ideas, or system contributions that make the work stand out.

End-to-end open-vocabulary video relationship framework
Relationship-aware trajectory detector with Transformer decoder
Multi-modal prompting for CLIP adaptation
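The vision-guided language prompting in the last point conditions the text prompt on the visual input rather than using a fixed template. A hedged sketch of that conditioning, in the style of CoCoOp-like approaches; the mean-pooled "meta-net" bias and all names here are hypothetical simplifications, not the paper's design:

```python
def vision_guided_prompt(context_tokens, visual_feature, class_embedding):
    """Build a prompt embedding sequence for one relation class:
    each learnable context token is shifted by a bias derived from
    the visual feature, then the class-name embedding is appended.
    The mean-pooled scalar bias is a stand-in for a learned meta-net."""
    bias = sum(visual_feature) / len(visual_feature)
    conditioned = [[t + bias for t in token] for token in context_tokens]
    return conditioned + [class_embedding]

# Two 2-d learnable context tokens, conditioned on one visual feature.
prompt = vision_guided_prompt(
    context_tokens=[[0.0, 0.0], [1.0, 1.0]],
    visual_feature=[0.2, 0.4],
    class_embedding=[9.0, 9.0],
)
```

The resulting sequence would be fed to CLIP's text encoder, so the same class name yields a different prompt for different videos.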
Yongqi Wang
Zhejiang University
Speech, Audio, Deep Learning
Shuo Yang
Guangdong Provincial Laboratory of Machine Perception and Intelligent Computing, Shenzhen MSU-BIT University, Shenzhen 518172, China
Xinxiao Wu
Beijing Laboratory of Intelligent Information Technology, School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China, and also with the Guangdong Provincial Laboratory of Machine Perception and Intelligent Computing, Shenzhen MSU-BIT University, Shenzhen 518172, China
Jiebo Luo
Department of Computer Science, University of Rochester, Rochester, NY 14627 USA