ResAgent: Entropy-based Prior Point Discovery and Visual Reasoning for Referring Expression Segmentation

📅 2026-01-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two limitations of existing referring expression segmentation methods: they rely on coarse bounding boxes generated by multimodal large language models, which often yield redundant or insufficiently discriminative point prompts, and they struggle to disambiguate visually similar distractors through text-based coordinate reasoning alone. To overcome these challenges, we propose ResAgent, a novel framework that formulates spatial uncertainty as an information maximization problem. ResAgent employs entropy-guided discovery to identify highly informative point prompts and replaces purely textual coordinate reasoning with a vision-language aligned inference mechanism, enabling robust coarse-to-fine segmentation. Our approach achieves state-of-the-art performance across four benchmarks (RefCOCO, RefCOCO+, RefCOCOg, and ReasonSeg), demonstrating significant improvements in both segmentation accuracy and semantic consistency.

📝 Abstract
Referring Expression Segmentation (RES) is a core vision-language segmentation task that enables pixel-level understanding of targets via free-form linguistic expressions, supporting critical applications such as human-robot interaction and augmented reality. Despite the progress of Multimodal Large Language Model (MLLM)-based approaches, existing RES methods still suffer from two key limitations: first, the coarse bounding boxes from MLLMs lead to redundant or non-discriminative point prompts; second, the prevalent reliance on textual coordinate reasoning is unreliable, as it fails to distinguish targets from visually similar distractors. To address these issues, we propose ResAgent, a novel RES framework integrating Entropy-Based Point Discovery (EBD) and Vision-Based Reasoning (VBR). Specifically, EBD identifies high-information candidate points by modeling spatial uncertainty within coarse bounding boxes, treating point selection as an information maximization process. VBR verifies point correctness through joint visual-semantic alignment, abandoning text-only coordinate inference for more robust validation. Built on these components, ResAgent implements a coarse-to-fine workflow: bounding box initialization, entropy-guided point discovery, vision-based validation, and mask decoding. Extensive evaluations on four benchmark datasets (RefCOCO, RefCOCO+, RefCOCOg, and ReasonSeg) demonstrate that ResAgent achieves new state-of-the-art performance across all four benchmarks, highlighting its effectiveness in generating accurate and semantically grounded segmentation masks with minimal prompts.
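The entropy-guided point discovery described in the abstract can be illustrated with a minimal, hypothetical sketch. It assumes access to a per-pixel foreground-probability map inside the coarse bounding box (the paper's actual uncertainty model is not detailed here); under that assumption, treating point selection as information maximization amounts to scoring each pixel by its binary entropy and keeping the top-k most uncertain locations as point prompts. The function name `entropy_points` and the probability-map input are illustrative, not the authors' API.

```python
# Hypothetical sketch of entropy-guided point discovery: score pixels
# of a foreground-probability map by binary entropy, keep the top-k.
import numpy as np

def entropy_points(prob_map, k=3, eps=1e-8):
    """Select the k highest-entropy pixels from a 2D probability map.

    prob_map: (H, W) array of foreground probabilities in [0, 1].
    Returns a list of (row, col) coordinates, most uncertain first.
    """
    p = np.clip(prob_map, eps, 1 - eps)
    # Binary entropy H(p) = -p log p - (1-p) log(1-p), maximal at p = 0.5,
    # i.e. where the model is least certain about foreground vs background.
    h = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    top = np.argsort(h, axis=None)[::-1][:k]  # flat indices, highest entropy first
    return [tuple(int(c) for c in np.unravel_index(i, h.shape)) for i in top]

# Toy example: a 3x3 map whose centre pixel (p = 0.5) is most uncertain.
demo = np.array([[0.95, 0.9, 0.1],
                 [0.8,  0.5, 0.2],
                 [0.9,  0.7, 0.05]])
print(entropy_points(demo, k=1))  # -> [(1, 1)]
```

In this framing, high-entropy pixels are exactly the "high-information" candidates the abstract refers to: a point where the model is already confident adds little, while an uncertain point maximally constrains the downstream mask decoder.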
Problem

Research questions and friction points this paper is trying to address.

Referring Expression Segmentation
Multimodal Large Language Model
Point Prompt
Visual Reasoning
Semantic Grounding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Entropy-based Point Discovery
Vision-based Reasoning
Referring Expression Segmentation
Multimodal Large Language Models
Visual-Semantic Alignment
Yihao Wang
Sun Yat-sen University
Jusheng Zhang
Sun Yat-sen University
Ziyi Tang
Sun Yat-sen University
Keze Wang
Sun Yat-sen University
Meng Yang
Sun Yat-sen University, IEEE Senior Member
Multimodal perception, Embodied AI, Machine Learning