🤖 AI Summary
This work addresses the limited accuracy of multimodal large language models (MLLMs) on referring expression comprehension (REC) by proposing a training-free contextual augmentation framework. It presents the first systematic analysis of how tool-generated textual descriptions and visual context jointly influence REC performance, and it integrates chain-of-thought reasoning to dynamically compose multi-source contextual information at inference time. This enhances the model's capacity to ground referring expressions without any additional training. Extensive experiments show that the proposed strategy consistently improves performance across multiple benchmarks (RefCOCO, RefCOCO+, RefCOCOg, and Ref-L4), achieving absolute accuracy gains of 5% to 30% over baseline models at varying IoU thresholds.
📝 Abstract
Given a textual description, the task of referring expression comprehension (REC) is to localise the referred object in an image. Multimodal large language models (MLLMs) have achieved high accuracy on REC benchmarks by scaling up model size and training data. Their performance can be further improved with techniques such as Chain-of-Thought prompting and tool use, which provide additional visual or textual context to the model. In this paper, we analyse how various techniques for providing additional visual and textual context to the MLLM via tool use affect REC performance. Furthermore, we propose a training-free framework named Chain-of-Caption to improve the REC performance of MLLMs. Experiments on the RefCOCO, RefCOCO+, RefCOCOg, and Ref-L4 datasets show that individual textual or visual contexts can improve REC performance without any fine-tuning. By combining multiple contexts, our training-free framework achieves 5% to 30% accuracy gains over the baseline model at various Intersection over Union (IoU) thresholds.
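Since the reported gains are accuracy at IoU thresholds, a minimal sketch of the box-level Intersection over Union metric may help; boxes are assumed here to be `(x1, y1, x2, y2)` pixel coordinates, and this is illustrative rather than the authors' evaluation code.

```python
def iou(box_a, box_b):
    """Return IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of areas minus the double-counted intersection.
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A predicted box counts as correct at threshold t when iou(pred, gt) >= t,
# e.g. the common Acc@0.5 criterion; raising t demands tighter localisation.
```

Under this criterion, accuracy at a stricter threshold (e.g. 0.9) rewards precise grounding rather than merely overlapping the target.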