🤖 AI Summary
This work addresses a critical limitation in reinforcement-learning-based reasoning segmentation: existing methods cannot discern whether the reasoning process genuinely focuses on the target region, often producing verbose and ineffective reasoning chains. To overcome this, the authors propose Discriminative Perceptual Anchor Descriptions (DPAD), a novel approach that introduces descriptive captions of the target region to enable semantic contrastive learning between the target and its surrounding context. This mechanism guides the model to attend to the distinctive attributes of the target, thereby anchoring the reasoning process and enhancing its interpretability. By integrating multimodal large language models, reinforcement learning, and caption generation, DPAD achieves a 3.09% improvement in cIoU on the ReasonSeg benchmark while reducing reasoning chain length by approximately 42%.
📝 Abstract
Reasoning segmentation increasingly employs reinforcement learning to generate explanatory reasoning chains that guide Multimodal Large Language Models. However, the geometric rewards used in these pipelines are confined to guiding the final localization; they cannot discriminate whether the reasoning process remains anchored on the referred region or strays into irrelevant context. Lacking this discriminative guidance, the model's reasoning often devolves into unfocused, verbose chains that ultimately fail to disambiguate and perceive the target in complex scenes. This suggests the need to complement the RL objective with Discriminative Perception: the ability to actively distinguish a target from its context. To realize this, we propose DPAD, which compels the model to generate a descriptive caption of the referred object and then discriminates explicitly by contrasting the caption's semantic relevance to the referred object against the wider context. Optimizing for this discriminative capability forces the model to focus on the unique attributes of the target, yielding a more convergent and efficient reasoning chain. The descriptive caption also serves as an interpretable rationale that aligns with the segmentation. Experiments on standard benchmarks confirm the validity of our approach, delivering substantial performance gains: cIoU on ReasonSeg increases by 3.09%, and reasoning chain length decreases by approximately 42%. Code is available at https://github.com/mrazhou/DPAD.
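The abstract describes contrasting a caption's semantic relevance to the referred object against the surrounding context. A minimal sketch of how such a discriminative score could be computed is shown below; this is an illustrative InfoNCE-style contrast over pre-computed embeddings, not the paper's actual reward. The function name `discriminative_reward`, the embedding inputs, and the temperature `tau` are all hypothetical placeholders.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors (plain Python lists)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def discriminative_reward(caption_emb, target_emb, context_embs, tau=0.1):
    """Hypothetical contrastive score: how much more relevant the generated
    caption is to the target region than to the surrounding context regions.

    Returns a value in (0, 1); near 1 means the caption discriminates the
    target well, near 0 means it describes the context instead.
    """
    pos = math.exp(cosine(caption_emb, target_emb) / tau)
    negs = sum(math.exp(cosine(caption_emb, c) / tau) for c in context_embs)
    return pos / (pos + negs)
```

In practice the embeddings would come from a vision-language encoder applied to the caption, the referred region, and sampled context regions; a score like this could then be added to the geometric (IoU-based) reward during RL training.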