🤖 AI Summary
Multimodal large language models (MLLMs) commonly lack spatial awareness, struggling in particular with visual grounding in the absence of explicit spatial supervision. To address this, we propose TWIST, a dual-expert progressive decoding framework for decoder fine-tuning, and SCOUT, a synthetically curated dataset of high-quality localization-description pairs generated via controllable composition. TWIST models human-like localization reasoning through a frozen/learnable expert collaboration mechanism and multi-step, inference-driven decoding, while preserving the model's original vision-language understanding capabilities. SCOUT enables effective training without requiring human-annotated grounding data. Our approach achieves state-of-the-art performance across grounding-aware image captioning, zero-shot localization, and visual grounding benchmarks, with no degradation in pretraining-era image understanding accuracy. Crucially, it introduces the first zero-forgetting visual grounding capability for MLLMs. This work establishes a scalable, low-intervention paradigm for enhancing spatial reasoning in multimodal foundation models.
📄 Abstract
Spatial awareness is key to enabling embodied multimodal AI systems. Yet, without vast amounts of spatial supervision, current Multimodal Large Language Models (MLLMs) struggle at this task. In this paper, we introduce TWIST&SCOUT, a framework that equips pre-trained MLLMs with visual grounding ability without forgetting their existing image and language understanding skills. To this end, we propose TWIST, a twin-expert stepwise tuning module that modifies the decoder of the language model using one frozen module pre-trained on image understanding tasks and another, learnable one for visual grounding tasks. This allows the MLLM to retain previously learned knowledge and skills while acquiring what is missing. To fine-tune the model effectively, we generate a high-quality synthetic dataset, which we call SCOUT, that mimics human reasoning in visual grounding. This dataset provides rich supervision signals describing a step-by-step multimodal reasoning process, thereby simplifying the task of visual grounding. We evaluate our approach on several standard benchmark datasets, encompassing grounded image captioning, zero-shot localization, and visual grounding tasks. Our method consistently delivers strong performance across all tasks, while retaining the pre-trained image understanding capabilities.