Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision

📅 2024-10-10
🏛️ arXiv.org
📈 Citations: 5
Influential: 2
🤖 AI Summary
Existing large multimodal models (LMMs) rely on explicit pixel-level supervision for visual grounding, which limits their generalization. Method: We empirically discover and validate that pixel-level visual grounding can emerge in LMMs trained under weak supervision, i.e., without any grounding annotations, enabling a novel "attend-and-segment" paradigm that leverages the model's attention maps for pixel-level segmentation. To enhance generalization and scalability, we propose DIFFLMM, which replaces the CLIP visual encoder with a diffusion-based visual encoder. Contribution/Results: Our approach achieves a 44.2 grounding mask recall without any grounding supervision, surpassing the fully supervised GLaMM model, and delivers competitive performance on both visual grounding and general-purpose VQA benchmarks. This work establishes a new weakly supervised paradigm for vision-language understanding.

📝 Abstract
Current large multimodal models (LMMs) face challenges in grounding, which requires the model to relate language components to visual entities. Contrary to the common practice that fine-tunes LMMs with additional grounding supervision, we find that the grounding ability can in fact emerge in LMMs trained without explicit grounding supervision. To reveal this emerging grounding, we introduce an "attend-and-segment" method which leverages attention maps from standard LMMs to perform pixel-level segmentation. Furthermore, to enhance the grounding ability, we propose DIFFLMM, an LMM utilizing a diffusion-based visual encoder, as opposed to the standard CLIP visual encoder, and trained with the same weak supervision. Without being constrained by the biases and limited scale of grounding-specific supervision data, our approach is more generalizable and scalable. We achieve competitive performance on both grounding-specific and general visual question answering benchmarks, compared with grounding LMMs and generalist LMMs, respectively. Notably, we achieve a 44.2 grounding mask recall on grounded conversation generation without any grounding supervision, outperforming the extensively supervised model GLaMM. Project page: https://groundLMM.github.io.
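To make the attend-and-segment idea concrete, here is a minimal, hypothetical sketch of the core step: converting a generated phrase token's attention over the visual patch tokens into a binary pixel mask. The function name, the assumed `(num_heads, num_patches)` attention layout, the min-max normalization, and the fixed threshold are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def attend_and_segment(attn, image_hw, threshold=0.5):
    """Hypothetical sketch: turn a text token's attention over image
    patch tokens into a binary pixel-level mask.

    attn:     (num_heads, num_patches) attention weights from one
              generated phrase token to the visual patch tokens
              (assumed layout; patches assumed on a square grid).
    image_hw: (H, W) size of the output mask.
    """
    # Average attention across heads, then reshape to the patch grid.
    patch_map = attn.mean(axis=0)
    side = int(round(np.sqrt(patch_map.size)))
    patch_map = patch_map.reshape(side, side)

    # Min-max normalize so the threshold is scale-independent.
    patch_map = (patch_map - patch_map.min()) / (np.ptp(patch_map) + 1e-8)

    # Nearest-neighbor upsample to pixel resolution and binarize.
    h, w = image_hw
    rows = np.arange(h) * side // h
    cols = np.arange(w) * side // w
    return patch_map[np.ix_(rows, cols)] > threshold
```

For example, attention concentrated on a single patch of a 4x4 grid yields a mask covering the corresponding block of pixels. A real implementation would also need to pick which layers and heads to aggregate and to associate each mask with the phrase being generated.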
Problem

Research questions and friction points this paper is trying to address.

Emerging visual grounding without explicit supervision
Leveraging attention maps for pixel-level segmentation
Enhancing grounding via diffusion-based visual encoder
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attend-and-segment method using attention maps
DIFFLMM employs diffusion-based visual encoder
Training without explicit grounding supervision