🤖 AI Summary
Open-vocabulary video visual relation detection aims to identify unseen relationships between both seen and unseen objects in videos; however, existing approaches rely on closed-set trajectory detectors, which limits their ability to generalize. This paper proposes an end-to-end framework that jointly performs trajectory detection and relation classification, enabling unified recognition of unseen objects and relations. The key contributions are: (1) a relation-aware open-vocabulary trajectory detector, built on a CLIP-distilled, query-based Transformer decoder and a trajectory association module, which removes the dependence on pre-trained closed-set trajectory detectors; (2) a relation query and an auxiliary relation loss that let the decoder exploit relational context during trajectory detection; and (3) an open-vocabulary relation classifier with a multimodal prompting mechanism that combines spatio-temporal visual prompts with vision-guided language prompts. Experiments demonstrate significant performance gains on VidVRD and VidOR, along with strong cross-dataset generalization.
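To make the detector side of this concrete, below is a minimal PyTorch sketch of the kind of query-based decoder the summary describes: per-frame object queries plus one extra relation query, open-vocabulary classification by cosine similarity against frozen CLIP-style text embeddings, and an auxiliary relation loss on the relation query's output. All module names, dimensions, and the loss formulation are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of a relation-aware, query-based decoder with
# open-vocabulary classification against CLIP-style text embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationAwareDecoder(nn.Module):
    def __init__(self, d_model=256, num_obj_queries=100, num_layers=6, clip_dim=512):
        super().__init__()
        self.obj_queries = nn.Embedding(num_obj_queries, d_model)   # object queries
        self.rel_query = nn.Embedding(1, d_model)                    # extra relation query
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.box_head = nn.Linear(d_model, 4)                        # per-frame boxes (cx, cy, w, h)
        self.obj_proj = nn.Linear(d_model, clip_dim)                 # project into CLIP text space
        self.rel_proj = nn.Linear(d_model, clip_dim)                 # used by the auxiliary relation loss

    def forward(self, frame_feats):
        # frame_feats: (B, N, d_model) frame tokens distilled from CLIP's visual encoder.
        B = frame_feats.size(0)
        queries = torch.cat([
            self.obj_queries.weight.unsqueeze(0).expand(B, -1, -1),
            self.rel_query.weight.unsqueeze(0).expand(B, -1, -1),
        ], dim=1)
        hs = self.decoder(queries, frame_feats)
        obj_hs, rel_hs = hs[:, :-1], hs[:, -1]
        boxes = self.box_head(obj_hs).sigmoid()
        obj_emb = F.normalize(self.obj_proj(obj_hs), dim=-1)         # matched to object-name embeddings
        rel_emb = F.normalize(self.rel_proj(rel_hs), dim=-1)         # matched to relation-name embeddings
        return boxes, obj_emb, rel_emb

def open_vocab_logits(emb, text_emb, tau=0.01):
    # Cosine-similarity classification against frozen text embeddings of category names.
    return emb @ F.normalize(text_emb, dim=-1).t() / tau

# Toy usage: 2 frames of features, 20 object names, 10 relation names.
model = RelationAwareDecoder()
feats = torch.randn(2, 196, 256)
obj_text = torch.randn(20, 512)    # in practice: CLIP text-encoder embeddings of object names
rel_text = torch.randn(10, 512)    # in practice: CLIP text-encoder embeddings of relation names
boxes, obj_emb, rel_emb = model(feats)
obj_logits = open_vocab_logits(obj_emb, obj_text)                    # (2, 100, 20)
aux_rel_loss = F.cross_entropy(open_vocab_logits(rel_emb, rel_text), # auxiliary relation loss
                               torch.tensor([3, 7]))
```

In a full pipeline, the per-frame detections would then be linked by the trajectory association module before relation classification; that step is omitted here.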
📝 Abstract
Open-vocabulary video visual relationship detection aims to expand video visual relationship detection beyond annotated categories by detecting unseen relationships between both seen and unseen objects in videos. Existing methods usually use trajectory detectors trained on closed datasets to detect object trajectories and then feed these trajectories into large-scale pre-trained vision-language models to achieve open-vocabulary classification. Such heavy dependence on pre-trained trajectory detectors limits their ability to generalize to novel object categories, leading to performance degradation. To address this challenge, we propose to unify object trajectory detection and relationship classification into an end-to-end open-vocabulary framework. Under this framework, we propose a relationship-aware open-vocabulary trajectory detector. It consists mainly of a query-based Transformer decoder, into which the visual encoder of CLIP is distilled for frame-wise open-vocabulary object detection, and a trajectory associator. To exploit relationship context during trajectory detection, a relationship query is embedded into the Transformer decoder, and an auxiliary relationship loss is accordingly designed to enable the decoder to explicitly perceive the relationships between objects. Moreover, we propose an open-vocabulary relationship classifier that leverages the rich semantic knowledge of CLIP to discover novel relationships. To better adapt CLIP to relationship classification, we design a multi-modal prompting method that employs spatio-temporal visual prompting for the visual representation and vision-guided language prompting for the language input. Extensive experiments on two public datasets, VidVRD and VidOR, demonstrate the effectiveness of our framework. We also apply our framework to a more challenging cross-dataset scenario to further demonstrate its generalization ability.
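The multi-modal prompting idea can be illustrated with a rough PyTorch sketch: learnable spatio-temporal visual prompts are attended over the paired trajectory features, and learnable language-prompt context tokens are shifted by a projection of the visual feature before being fed to a CLIP-style text encoder. The `text_encoder` below is a stand-in black box for CLIP's text encoder, and every module, dimension, and name is an assumption made for illustration rather than the paper's actual design.

```python
# Hypothetical sketch of spatio-temporal visual prompting plus
# vision-guided language prompting for open-vocabulary relation classification.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiModalPrompting(nn.Module):
    def __init__(self, vis_dim=512, txt_dim=512, n_ctx=4, n_prompt=8):
        super().__init__()
        # Spatio-temporal visual prompt: learnable tokens attended over the trajectory-pair frames.
        self.st_prompt = nn.Parameter(torch.randn(1, n_prompt, vis_dim) * 0.02)
        self.temporal_attn = nn.MultiheadAttention(vis_dim, num_heads=8, batch_first=True)
        # Vision-guided language prompt: shared learnable context plus a projection of the visual feature.
        self.ctx = nn.Parameter(torch.randn(n_ctx, txt_dim) * 0.02)
        self.meta_net = nn.Sequential(nn.Linear(vis_dim, txt_dim), nn.ReLU(), nn.Linear(txt_dim, txt_dim))

    def forward(self, pair_feats, rel_name_tokens, text_encoder):
        # pair_feats: (B, T, vis_dim) per-frame features of a subject-object trajectory pair.
        # rel_name_tokens: (C, L, txt_dim) token embeddings of C relation names.
        prompted, _ = self.temporal_attn(self.st_prompt.expand(pair_feats.size(0), -1, -1),
                                         pair_feats, pair_feats)
        vis = F.normalize(prompted.mean(dim=1), dim=-1)               # pooled visual representation, (B, vis_dim)
        bias = self.meta_net(vis).unsqueeze(1)                        # (B, 1, txt_dim)
        ctx = self.ctx.unsqueeze(0) + bias                            # vision-guided context, (B, n_ctx, txt_dim)
        logits = []
        for b in range(vis.size(0)):
            # Prepend the guided context to every relation name, then encode.
            prompts = torch.cat([ctx[b].unsqueeze(0).expand(rel_name_tokens.size(0), -1, -1),
                                 rel_name_tokens], dim=1)             # (C, n_ctx + L, txt_dim)
            txt = F.normalize(text_encoder(prompts), dim=-1)          # (C, txt_dim)
            logits.append(vis[b] @ txt.t())
        return torch.stack(logits) / 0.01                             # (B, C) relation logits

# Toy usage with a stand-in text encoder (mean pooling instead of CLIP's Transformer).
text_encoder = lambda p: p.mean(dim=1)
module = MultiModalPrompting()
scores = module(torch.randn(2, 8, 512), torch.randn(10, 6, 512), text_encoder)
print(scores.shape)  # torch.Size([2, 10])
```

Because the relation names are only consumed as text embeddings, unseen relation categories can be scored at inference simply by adding their names to the prompt set, which is what makes this kind of classifier open-vocabulary.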