MMMS: Multi-Modal Multi-Surface Interactive Segmentation

📅 2025-09-16
🤖 AI Summary
This paper addresses Multi-Modal Multi-Surface interactive segmentation (MMMS): precise mask generation for multiple heavily entangled or adjacent surfaces within a single image, using only sparse user clicks. The authors propose a click-driven architecture that supports both RGB and non-RGB inputs, fuses the interaction signals at the feature level only after image feature extraction, and remains compatible with off-the-shelf, black-box RGB backbones; they also introduce an extended evaluation metric tailored to the multi-surface setting. Key contributions: (1) a formal definition of the MMMS task; (2) the first interactive segmentation framework supporting multiple surfaces, multi-modal inputs, and sparse clicks; and (3) strong results on the DeLiVER and MFNet benchmarks, where the additional modalities reduce the mean NoC@90 by up to 1.28 and 1.19 clicks per surface, respectively, while the RGB-only variant matches or surpasses prior art in the classical single-mask scenario.

📝 Abstract
In this paper, we present a method to interactively create segmentation masks on the basis of user clicks. We pay particular attention to the segmentation of multiple surfaces that are simultaneously present in the same image. Since these surfaces may be heavily entangled and adjacent, we also present a novel extended evaluation metric that accounts for the challenges of this scenario. Additionally, the presented method is able to use multi-modal inputs to facilitate the segmentation task. At the center of this method is a network architecture which takes as input an RGB image, a number of non-RGB modalities, an erroneous mask, and encoded clicks. Based on this input, the network predicts an improved segmentation mask. We design our architecture such that it adheres to two conditions: (1) The RGB backbone is only available as a black-box. (2) To reduce the response time, we want our model to integrate the interaction-specific information after the image feature extraction and the multi-modal fusion. We refer to the overall task as Multi-Modal Multi-Surface interactive segmentation (MMMS). We are able to show the effectiveness of our multi-modal fusion strategy. Using additional modalities, our system reduces the NoC@90 by up to 1.28 clicks per surface on average on DeLiVER and up to 1.19 on MFNet. On top of this, we are able to show that our RGB-only baseline achieves competitive, and in some cases even superior performance when tested in a classical, single-mask interactive segmentation scenario.
Problem

Research questions and friction points this paper is trying to address.

Interactive segmentation of multiple entangled surfaces in images
Multi-modal input integration for improved segmentation accuracy
Novel evaluation metric for multi-surface segmentation challenges
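The NoC@90 metric referenced above (the number of clicks needed to reach 90% IoU, averaged per surface) can be sketched as follows; the `predict` and `simulate_click` callables and the 20-click budget are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0

def noc_at_k(predict, simulate_click, gt: np.ndarray,
             target_iou: float = 0.90, max_clicks: int = 20) -> int:
    """Number of Clicks until `target_iou` is reached (capped at `max_clicks`).

    `predict(clicks, prev_mask)` returns a refined boolean mask and
    `simulate_click(prev_mask, gt)` places the next click in an error
    region -- both are placeholders for the real model and simulator.
    """
    clicks, mask = [], np.zeros_like(gt, dtype=bool)
    for n in range(1, max_clicks + 1):
        clicks.append(simulate_click(mask, gt))
        mask = predict(clicks, mask)
        if iou(mask, gt) >= target_iou:
            return n
    return max_clicks
```

For the multi-surface case, this count would be averaged over all surfaces in an image, which is why the reported improvements are expressed as clicks per surface.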
Innovation

Methods, ideas, or system contributions that make the work stand out.

A single network integrates an RGB image, non-RGB modalities, and user clicks
Interaction features are fused only after RGB feature extraction and multi-modal fusion, keeping response time low
Takes the previous, erroneous mask and encoded clicks as input to predict an improved mask
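The design constraints above (black-box RGB backbone, late injection of interaction signals) can be sketched roughly as below; all module sizes, names, and channel layouts are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class LateInteractionFusion(nn.Module):
    """Sketch: a frozen, black-box RGB backbone, feature-level fusion with a
    non-RGB modality, and click/mask features injected only *after* image
    feature extraction, so only the light fusion layers rerun per click."""

    def __init__(self, rgb_backbone: nn.Module, dim: int = 64):
        super().__init__()
        self.rgb_backbone = rgb_backbone            # black-box, left untouched
        for p in self.rgb_backbone.parameters():
            p.requires_grad_(False)
        self.aux_encoder = nn.Conv2d(1, dim, 3, padding=1)    # non-RGB modality
        self.click_encoder = nn.Conv2d(3, dim, 3, padding=1)  # clicks + prev mask
        self.fuse = nn.Conv2d(2 * dim, dim, 1)
        self.head = nn.Conv2d(dim, 1, 1)

    def forward(self, rgb, aux, clicks_and_mask):
        f_rgb = self.rgb_backbone(rgb)              # extracted once per image
        f = self.fuse(torch.cat([f_rgb, self.aux_encoder(aux)], dim=1))
        f = f + self.click_encoder(clicks_and_mask) # late interaction injection
        return self.head(f)                         # refined mask logits
```

Because the backbone features and the multi-modal fusion can be cached across interaction rounds, each new click only needs a cheap forward pass through the interaction branch and head.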