🤖 AI Summary
Interactive segmentation (IS) faces an inherent trade-off between accuracy and CPU inference efficiency: dense-token methods achieve high accuracy but suffer from slow inference, whereas sparse-prompt approaches like SAM enable fast inference yet lose fine boundary details. To address this, the authors propose Inter2Former, a framework that optimizes computation allocation in dense-token processing. It introduces Dynamic Prompt Embedding (DPE), Dynamic Hybrid Attention (DHA), a Hybrid Mixture-of-Experts feed-forward network (HMoE), and Dynamic Local Upsampling (DLU), jointly optimizing fine-grained boundary modeling and efficient computation in non-boundary regions. By integrating full attention with BSQ-based sparse attention, Inter2Former enables CPU-friendly parallelism and adaptive token routing. On mainstream high-precision IS benchmarks, it achieves state-of-the-art accuracy while maintaining high CPU inference efficiency, outperforming SAM and dense-token baselines in the accuracy-efficiency trade-off.
📝 Abstract
Interactive segmentation (IS) improves annotation efficiency by segmenting target regions from user prompts, with widespread applications in real-world scenarios. Current approaches face a critical trade-off: dense-token methods achieve superior accuracy and detail preservation but suffer from prohibitively slow processing on CPU devices, while the Segment Anything Model (SAM) advances the field with sparse prompt tokens for fast inference but compromises segmentation quality. In this paper, we propose Inter2Former to address this challenge by optimizing computation allocation in dense-token processing, introducing four key enhancements. First, we propose Dynamic Prompt Embedding (DPE), which adaptively processes only regions of interest while avoiding additional overhead from background tokens. Second, we introduce Dynamic Hybrid Attention (DHA), which leverages previous segmentation masks to route tokens through either full attention (O(N²)) for boundary regions or our proposed efficient BSQ attention (O(N)) for non-boundary regions. Third, we develop Hybrid Mixture-of-Experts (HMoE), which applies similar adaptive computation strategies in FFN modules with CPU-optimized parallel processing. Finally, we present Dynamic Local Upsampling (DLU), a reverse operation of DPE, which localizes objects with a lightweight MLP and performs fine-grained upsampling only in detected regions. Experimental results on high-precision IS benchmarks demonstrate that Inter2Former achieves SOTA performance with high efficiency on CPU devices.
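The routing idea behind DHA can be illustrated with a minimal sketch: tokens flagged as boundary by the previous mask go through standard O(N²) attention, while the rest go through an O(N) kernelized linear attention. Note this is an illustrative assumption, not the paper's implementation; in particular, `linear_attention` below is a generic positive-feature-map linear attention standing in for the paper's BSQ attention, whose exact formulation is not given in the abstract, and all function names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def full_attention(q, k, v):
    # Standard O(N^2) scaled dot-product attention.
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ v

def linear_attention(q, k, v):
    # Generic O(N) kernelized attention: a stand-in for BSQ attention.
    phi = lambda x: np.maximum(x, 0.0) + 1e-6  # positive feature map
    qp, kp = phi(q), phi(k)
    kv = kp.T @ v                     # (d, d_v), computed once for all queries
    z = qp @ kp.sum(axis=0)           # per-query normalizer, shape (N,)
    return (qp @ kv) / z[:, None]

def dynamic_hybrid_attention(tokens, boundary_mask):
    """Route boundary tokens through full attention and all other
    tokens through linear attention, as DHA's description suggests."""
    out = np.empty_like(tokens)
    b = boundary_mask.astype(bool)
    if b.any():
        out[b] = full_attention(tokens[b], tokens[b], tokens[b])
    if (~b).any():
        out[~b] = linear_attention(tokens[~b], tokens[~b], tokens[~b])
    return out
```

In practice the boundary mask would be derived from the previous segmentation round (e.g. by dilating the mask contour), so the expensive quadratic path only touches a small fraction of tokens.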