AI Summary
Visual relationship detection (VRD) models exhibit poor generalization to unseen relationships, and existing prompt-tuning approaches struggle with reasoning over complex or novel relational concepts. To address this, we propose Adaptive Relation Tuning (ART), the first framework to introduce instruction tuning into VRD. ART integrates dynamic instance selection with vision-language joint modeling to enable zero-shot and few-shot inference on unseen relations. It reformulates VRD data into a unified, structured instruction format; employs adaptive sampling to optimize the training instance distribution; and incorporates relational semantic priors to enhance discriminative capability. Extensive experiments demonstrate that ART significantly outperforms state-of-the-art methods across multiple VRD benchmarks. Moreover, it exhibits strong generalization in downstream semantic segmentation tasks under complex scene conditions, validating its robustness and transferability.
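The adaptive sampling step described above can be illustrated with a small sketch. The heuristic below (inverse-frequency weighting of relation instances) is an assumption for illustration only, not ART's actual algorithm; the function name `adaptive_sample` and the `alpha` parameter are hypothetical.

```python
import random
from collections import Counter

def adaptive_sample(instances, k, alpha=0.5):
    """Pick k training instances, down-weighting over-represented
    relations so rare predicates stay in the mix.

    `instances` is a list of (subject, predicate, object) triples.
    Illustrative heuristic only -- not the paper's actual method.
    """
    freq = Counter(pred for _, pred, _ in instances)
    # Weight each instance by an inverse-frequency factor: frequent
    # predicates get smaller weights, rare ones larger weights.
    weights = [1.0 / (freq[pred] ** alpha) for _, pred, _ in instances]
    # Sample with replacement according to those weights.
    return random.choices(instances, weights=weights, k=k)

triples = [
    ("person", "riding", "horse"),
    ("person", "on", "horse"),
    ("dog", "on", "sofa"),
    ("cat", "chasing", "mouse"),
]
sample = adaptive_sample(triples, k=2)
```

The effect is to flatten the long-tailed predicate distribution typical of VRD datasets, so that training instances cover informative, rarer relations rather than being dominated by a few frequent ones.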
Abstract
Visual relation detection (VRD) is the task of identifying the relationships between objects in a scene. VRD models trained solely on relation detection data struggle to generalize beyond the relations on which they are trained. While prompt tuning has been used to adapt vision-language models (VLMs) for VRD, it relies on handcrafted prompts and struggles with novel or complex relations. We argue that instruction tuning offers a more effective solution by fine-tuning VLMs on diverse instructional data. We thus introduce ART, an Adaptive Relation Tuning framework that adapts VLMs for VRD through instruction tuning and strategic instance selection. By converting VRD datasets into an instruction tuning format and employing an adaptive sampling algorithm, ART directs the VLM to focus on informative relations while maintaining generalizability. Specifically, we focus on relation classification, where subject-object boxes are given and the model predicts the predicate between them. We tune on a held-in set and evaluate across multiple held-out datasets of varying complexity. Our approach strongly improves over its baselines and can infer unseen relation concepts, a capability absent in mainstream VRD methods. We demonstrate ART's practical value by using the predicted relations for segmenting complex scenes.
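The conversion of a VRD annotation into an instruction-tuning record can be sketched as follows. The prompt template, the field names, and the helper `to_instruction_example` are hypothetical; the paper's actual template is not reproduced here. The sketch only shows the general shape: the subject and object with their boxes go into the instruction, and the predicate becomes the target answer.

```python
def to_instruction_example(image_id, subj, subj_box, obj, obj_box, predicate):
    """Turn one annotated relation instance into an instruction-tuning
    record. Given the subject/object labels and their bounding boxes,
    the model is asked to produce the predicate between them.
    The template below is an illustrative assumption, not ART's exact format.
    """
    prompt = (
        f"In the image, what is the relationship between the "
        f"{subj} at {subj_box} and the {obj} at {obj_box}? "
        f"Answer with a short predicate."
    )
    return {"image": image_id, "instruction": prompt, "answer": predicate}

# One annotated triple ("person", "riding", "horse") with boxes (x1, y1, x2, y2):
ex = to_instruction_example(
    "img_001",
    "person", (12, 30, 118, 200),
    "horse", (90, 60, 300, 240),
    "riding",
)
```

Framing relation classification as question answering in this way is what lets an instruction-tuned VLM emit predicates outside the training label set, since the answer is free-form text rather than a fixed-size classifier head.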