🤖 AI Summary
Existing multimodal large language models (MLLMs) primarily target general visual understanding and lack Region-level Context-aware Multimodal Understanding (RCMU): the ability to respond to instructions by jointly integrating image content and the textual information associated with regions or objects. This work proposes Region-level Context-aware Visual Instruction Tuning (RCVIT), which incorporates object information into the model input and uses bounding-box coordinates to associate each object's visual content with its textual description. The authors introduce the RCMU dataset, a large-scale visual instruction tuning dataset covering multiple RCMU tasks; RC&P-Bench, a comprehensive benchmark for RCMU and multimodal personalized understanding; and a reference-free, fine-grained metric for evaluating region-level context-aware image descriptions. Applying RCVIT to Qwen2-VL yields the RC-Qwen2-VL models, which outperform baselines across diverse RCMU tasks and enable downstream applications including multimodal retrieval-augmented generation (RAG) and personalized conversation.
📝 Abstract
Despite significant progress, existing research on Multimodal Large Language Models (MLLMs) mainly focuses on general visual understanding and overlooks the ability to integrate the textual context associated with objects for more context-aware multimodal understanding, an ability we refer to as Region-level Context-aware Multimodal Understanding (RCMU). To address this limitation, we first formulate the RCMU task, which requires models to respond to user instructions by integrating both the image content and the textual information of regions or objects. To equip MLLMs with RCMU capabilities, we propose Region-level Context-aware Visual Instruction Tuning (RCVIT), which incorporates object information into the model input and enables the model to use bounding-box coordinates to effectively associate objects' visual content with their textual information. To address the lack of datasets, we introduce the RCMU dataset, a large-scale visual instruction tuning dataset that covers multiple RCMU tasks. We also propose RC&P-Bench, a comprehensive benchmark that evaluates the performance of MLLMs on RCMU and multimodal personalized understanding tasks. Additionally, we propose a reference-free evaluation metric for a comprehensive and fine-grained evaluation of region-level context-aware image descriptions. By performing RCVIT on Qwen2-VL models with the RCMU dataset, we develop the RC-Qwen2-VL models. Experimental results show that RC-Qwen2-VL models not only achieve outstanding performance on multiple RCMU tasks but also demonstrate successful applications in multimodal RAG and personalized conversation. Our data, models, and benchmark are available at https://github.com/hongliang-wei/RC-MLLM.
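To make the core idea concrete, the sketch below shows one way region-level textual context might be serialized into a model prompt alongside bounding-box coordinates, so that an MLLM can ground each description to a specific image region. The function name, the `<obj>`/`<box>` tag format, and the coordinate convention are illustrative assumptions, not the paper's actual input schema.

```python
def build_rcmu_prompt(instruction, regions):
    """Compose a region-context prompt for an MLLM (illustrative only).

    regions: list of (bbox, description) pairs, where bbox is
    (x1, y1, x2, y2) in normalized [0, 1000] coordinates, a common
    convention in grounding-capable MLLMs. The tag format below is a
    hypothetical stand-in for the paper's input schema.
    """
    lines = []
    for i, (bbox, desc) in enumerate(regions, start=1):
        x1, y1, x2, y2 = bbox
        # Tie each textual description to its bounding box so the model
        # can associate the region's visual content with its context.
        lines.append(f"<obj{i}> <box>({x1},{y1}),({x2},{y2})</box>: {desc}")
    context = "\n".join(lines)
    return f"Region context:\n{context}\n\nInstruction: {instruction}"


prompt = build_rcmu_prompt(
    "Describe the image using the provided information.",
    [((120, 80, 430, 560), "Alice Chen, team lead, joined in 2019"),
     ((500, 90, 820, 600), "Bob Park, visiting researcher")],
)
print(prompt)
```

In an actual RCVIT setup, a string like this would accompany the image tokens during instruction tuning, and the same serialization would be reused at inference time for tasks such as personalized conversation.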