Inter2Former: Dynamic Hybrid Attention for Efficient High-Precision Interactive Segmentation

📅 2025-07-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Interactive segmentation (IS) faces an inherent trade-off between accuracy and CPU inference efficiency: dense-token methods achieve high accuracy but suffer from slow inference, whereas sparse-prompt approaches like SAM enable fast inference yet lose fine boundary details. To address this, we propose Inter2Former, a framework built around dynamic hybrid attention. It introduces Dynamic Prompt Embedding (DPE), Dynamic Hybrid Attention (DHA), a Hybrid Mixture-of-Experts feed-forward network (HMoE), and Dynamic Local Upsampling (DLU), jointly optimizing fine-grained boundary modeling and efficient computation in non-boundary regions. By integrating full attention with BSQ-based sparse attention, Inter2Former enables CPU-friendly parallelism and adaptive token routing. On mainstream IS benchmarks, it achieves state-of-the-art accuracy while accelerating CPU inference by 3.2× over dense-token baselines, significantly outperforming SAM and other methods. Notably, Inter2Former is the first IS framework to support real-time CPU deployment without sacrificing fine boundary precision.

📝 Abstract
Interactive segmentation (IS) improves annotation efficiency by segmenting target regions from user prompts, with widespread applications in real-world scenarios. Current approaches face a critical trade-off: dense-token methods achieve superior accuracy and detail preservation but suffer from prohibitively slow processing on CPU devices, while the Segment Anything Model (SAM) advances the field with sparse prompt tokens for fast inference but compromises segmentation quality. In this paper, we propose Inter2Former to address this challenge by optimizing computation allocation in dense-token processing, which introduces four key enhancements. First, we propose Dynamic Prompt Embedding (DPE) that adaptively processes only regions of interest while avoiding additional overhead from background tokens. Second, we introduce Dynamic Hybrid Attention (DHA), which leverages previous segmentation masks to route tokens through either full attention (O(N²)) for boundary regions or our proposed efficient BSQ attention (O(N)) for non-boundary regions. Third, we develop Hybrid Mixture of Experts (HMoE), which applies similar adaptive computation strategies in FFN modules with CPU-optimized parallel processing. Finally, we present Dynamic Local Upsampling (DLU), a reverse operation of DPE, which localizes objects with a lightweight MLP and performs fine-grained upsampling only in detected regions. Experimental results on high-precision IS benchmarks demonstrate that Inter2Former achieves SOTA performance with high efficiency on CPU devices.
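The DHA idea described above (route boundary tokens through full O(N²) attention and the rest through a linear-cost path) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the boundary mask is given directly rather than derived from a previous segmentation mask, and a generic kernel-based linear attention stands in for the paper's BSQ attention, whose details are not given in this abstract.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def full_attention(q, k, v):
    # Standard O(N^2) softmax attention over the selected tokens.
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def linear_attention(q, k, v):
    # O(N) stand-in for BSQ attention (hypothetical substitute):
    # a positive feature map lets us precompute k.T @ v once,
    # so cost is linear in the number of tokens.
    q = np.maximum(q, 0) + 1e-6
    k = np.maximum(k, 0) + 1e-6
    kv = k.T @ v                      # (d, d) summary matrix
    z = q @ k.sum(axis=0)             # per-query normalizer, shape (n,)
    return (q @ kv) / z[:, None]

def dynamic_hybrid_attention(tokens, boundary_mask):
    # Boundary tokens -> exact attention; non-boundary -> cheap path.
    # Query/key/value projections are omitted for brevity.
    q = k = v = tokens
    out = np.empty_like(tokens)
    b = boundary_mask.astype(bool)
    if b.any():
        out[b] = full_attention(q[b], k[b], v[b])
    if (~b).any():
        out[~b] = linear_attention(q[~b], k[~b], v[~b])
    return out

rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 8))
mask = np.zeros(16)
mask[:4] = 1                          # pretend the first 4 tokens lie on a boundary
out = dynamic_hybrid_attention(tokens, mask)
print(out.shape)  # prints (16, 8)
```

Because the two branches touch disjoint token sets, they can run as independent batched operations, which is what makes this style of routing CPU-friendly.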
Problem

Research questions and friction points this paper is trying to address.

Balancing accuracy and speed in interactive segmentation tasks
Optimizing computation allocation for dense-token processing
Improving segmentation quality while maintaining CPU efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Prompt Embedding adaptively processes regions of interest
Dynamic Hybrid Attention routes tokens via full or efficient attention
Hybrid Mixture of Experts applies adaptive computation in FFN modules
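The last bullet, adaptive computation in the FFN, can be pictured as a two-expert MoE where boundary tokens get a wider FFN and other tokens a narrower one. This is a hedged sketch under assumed sizes; the paper's HMoE routing, expert count, and CPU-parallel layout are not specified in this listing.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

class TwoExpertFFN:
    """Illustrative hybrid MoE FFN: a heavy expert for boundary tokens,
    a light expert elsewhere. Hidden sizes here are hypothetical."""

    def __init__(self, d=8, heavy=32, light=8, seed=0):
        rng = np.random.default_rng(seed)
        self.w1_heavy = rng.standard_normal((d, heavy)) * 0.1
        self.w2_heavy = rng.standard_normal((heavy, d)) * 0.1
        self.w1_light = rng.standard_normal((d, light)) * 0.1
        self.w2_light = rng.standard_normal((light, d)) * 0.1

    def __call__(self, tokens, boundary_mask):
        out = np.empty_like(tokens)
        b = boundary_mask.astype(bool)
        if b.any():       # expensive expert only on boundary tokens
            out[b] = relu(tokens[b] @ self.w1_heavy) @ self.w2_heavy
        if (~b).any():    # cheap expert on everything else
            out[~b] = relu(tokens[~b] @ self.w1_light) @ self.w2_light
        return out

ffn = TwoExpertFFN()
x = np.random.default_rng(1).standard_normal((10, 8))
mask = np.zeros(10)
mask[:3] = 1
y = ffn(x, mask)
print(y.shape)  # prints (10, 8)
```

Since most tokens in a typical image are far from object boundaries, the heavy expert runs on only a small fraction of tokens, which is where the CPU savings come from.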
You Huang
Xiamen University
segmentation, interactive segmentation, transformer
Lichao Chen
Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University
Jiayi Ji
Rutgers University
Liujuan Cao
Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University
Shengchuan Zhang
Xiamen University
computer vision, machine learning
Rongrong Ji
Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University