MMR: A Large-scale Benchmark Dataset for Multi-target and Multi-granularity Reasoning Segmentation

📅 2025-03-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing referring segmentation datasets focus on single-object, object-level understanding and lack support for fine-grained semantic reasoning over multiple objects and their constituent parts. Method: We introduce the Multi-Object Multi-Granularity Referring Segmentation (MMR) task and present the first large-scale benchmark—comprising 194K implicit instructions—that jointly supports object- and part-level recognition and cross-object relational modeling. We formally define the task paradigm, propose hierarchical annotation protocols and layered prompt engineering, and design a lightweight multi-head decoding head for end-to-end joint segmentation. Results: Experiments on MMR reveal a >32% accuracy drop in part-level recognition for current state-of-the-art models, confirming the benchmark’s diagnostic utility. Our method significantly outperforms baselines, establishing a new foundation for fine-grained, embodied visual-language reasoning.

Technology Category

Application Category

📝 Abstract
The fusion of Large Language Models with vision models is pioneering new possibilities in user-interactive vision-language tasks. A notable application is reasoning segmentation, where models generate pixel-level segmentation masks by comprehending implicit meanings in human instructions. However, seamless human-AI interaction demands more than just object-level recognition; it requires understanding both objects and the functions of their detailed parts, particularly in multi-target scenarios. For example, when instructing a robot to extit{turn on the TV"}, there could be various ways to accomplish this command. Recognizing multiple objects capable of turning on the TV, such as the TV itself or a remote control (multi-target), provides more flexible options and aids in finding the optimized scenario. Furthermore, understanding specific parts of these objects, like the TV's button or the remote's button (part-level), is important for completing the action. Unfortunately, current reasoning segmentation datasets predominantly focus on a single target object-level reasoning, which limits the detailed recognition of an object's parts in multi-target contexts. To address this gap, we construct a large-scale dataset called Multi-target and Multi-granularity Reasoning (MMR). MMR comprises 194K complex and implicit instructions that consider multi-target, object-level, and part-level aspects, based on pre-existing image-mask sets. This dataset supports diverse and context-aware interactions by hierarchically providing object and part information. Moreover, we propose a straightforward yet effective framework for multi-target, object-level, and part-level reasoning segmentation. Experimental results on MMR show that the proposed method can reason effectively in multi-target and multi-granularity scenarios, while the existing reasoning segmentation model still has room for improvement.
Problem

Research questions and friction points this paper is trying to address.

Addresses multi-target and multi-granularity reasoning segmentation challenges.
Enhances understanding of object parts in multi-target scenarios.
Introduces a large-scale dataset for complex, context-aware interactions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fuses Large Language Models with vision models
Introduces Multi-target and Multi-granularity Reasoning dataset
Proposes framework for multi-target, object-level, part-level segmentation
🔎 Similar Papers
No similar papers found.
Donggon Jang
Donggon Jang
KAIST
Computer VisionDeep Learning
Y
Yucheol Cho
Department of Electrical Engineering, KAIST
S
Suin Lee
Department of Electrical Engineering, KAIST
T
Taehyeon Kim
Department of Electrical Engineering, KAIST
Dae-Shik Kim
Dae-Shik Kim
KAIST
NeuroscienceMRIBrain ImagingNeuro-RoboticsNeuromorphic Engineering