TWIST&SCOUT: Grounding Multimodal LLM-Experts by Forget-Free Tuning

๐Ÿ“… 2024-10-14
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Multimodal large language models (MLLMs) commonly lack spatial awareness, especially struggling with visual grounding in the absence of explicit spatial supervision. To address this, we propose TWISTโ€”a dual-expert progressive decoding framework for decoder fine-tuningโ€”and SCOUT, a synthetically curated dataset of high-quality localization-description pairs generated via controllable composition. TWIST models human-like localization reasoning through a frozen/learnable expert collaboration mechanism and multi-step, inference-driven decoding, while preserving original vision-language understanding capabilities. SCOUT enables effective training without requiring human-annotated grounding data. Our approach achieves state-of-the-art performance across grounding-aware image captioning, zero-shot localization, and visual grounding benchmarks, with no degradation in pretraining-era image understanding accuracy. Crucially, it introduces the first zero-forgetting visual grounding capability for MLLMs. This work establishes a scalable, low-intervention paradigm for enhancing spatial reasoning in multimodal foundation models.

Technology Category

Application Category

๐Ÿ“ Abstract
Spatial awareness is key to enable embodied multimodal AI systems. Yet, without vast amounts of spatial supervision, current Multimodal Large Language Models (MLLMs) struggle at this task. In this paper, we introduce TWIST&SCOUT, a framework that equips pre-trained MLLMs with visual grounding ability without forgetting their existing image and language understanding skills. To this end, we propose TWIST, a twin-expert stepwise tuning module that modifies the decoder of the language model using one frozen module pre-trained on image understanding tasks and another learnable one for visual grounding tasks. This allows the MLLM to retain previously learned knowledge and skills, while acquiring what is missing. To fine-tune the model effectively, we generate a high-quality synthetic dataset we call SCOUT, which mimics human reasoning in visual grounding. This dataset provides rich supervision signals, describing a step-by-step multimodal reasoning process, thereby simplifying the task of visual grounding. We evaluate our approach on several standard benchmark datasets, encompassing grounded image captioning, zero-shot localization, and visual grounding tasks. Our method consistently delivers strong performance across all tasks, while retaining the pre-trained image understanding capabilities.
Problem

Research questions and friction points this paper is trying to address.

Enhance spatial awareness in Multimodal Large Language Models (MLLMs).
Prevent forgetting existing skills while adding visual grounding.
Generate synthetic dataset for effective visual grounding training.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Twin-expert stepwise tuning for visual grounding
Synthetic dataset SCOUT mimics human reasoning
Retains pre-trained skills while acquiring new ones
๐Ÿ”Ž Similar Papers
No similar papers found.