RADSeg: Unleashing Parameter and Compute Efficient Zero-Shot Open-Vocabulary Segmentation Using Agglomerative Models

📅 2025-11-24
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Current open-vocabulary semantic segmentation (OVSS) methods suffer from weak generalization and high computational overhead: supervised approaches are constrained by labeled data, while zero-shot methods rely on multi-model ensembles with 850–1,350M parameters and inefficient inference. This work presents the first systematic exploration of the agglomerative vision foundation model RADIO for zero-shot OVSS. The authors propose a lightweight, three-module architecture comprising recursive self-attention, global self-attention aggregation, and efficient mask refinement. Integrated into a ViT backbone, the method achieves consistent mIoU gains of 6–30% over prior zero-shot approaches, accelerates inference by 3.95×, and reduces parameter count to just 105M, 2.5× fewer than state-of-the-art ensemble models, while surpassing their performance at significantly lower computational cost.

πŸ“ Abstract
Open-vocabulary semantic segmentation (OVSS) underpins many vision and robotics tasks that require generalizable semantic understanding. Existing approaches either rely on limited segmentation training data, which hinders generalization, or apply zero-shot heuristics to vision-language models (e.g., CLIP), while the most competitive approaches combine multiple models to improve performance at the cost of high computational and memory demands. In this work, we leverage an overlooked agglomerative vision foundation model, RADIO, to improve zero-shot OVSS along three key axes simultaneously: mIoU, latency, and parameter efficiency. We present the first comprehensive study of RADIO for zero-shot OVSS and enhance its performance through self-correlating recursive attention, self-correlating global aggregation, and computationally efficient mask refinement. Our approach, RADSeg, achieves 6-30% mIoU improvement in the base ViT class while being 3.95x faster and using 2.5x fewer parameters. Surprisingly, RADSeg-base (105M) outperforms previous combinations of huge vision models (850-1350M) in mIoU, achieving state-of-the-art accuracy with substantially lower computational and memory cost.
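For context on the zero-shot setting the abstract describes: the standard recipe scores each patch embedding against CLIP-style text embeddings of the class names and assigns the highest-scoring label. A minimal numpy sketch of that labeling rule, assuming pre-computed patch and text features (array names and shapes are illustrative, not from the paper):

```python
import numpy as np

def zero_shot_segment(patch_feats, text_feats):
    """Label each image patch with its nearest text embedding.

    patch_feats: (H*W, D) patch embeddings from a vision encoder.
    text_feats:  (C, D) class-name embeddings from a text encoder.
    Returns (H*W,) class indices chosen by cosine similarity,
    the usual zero-shot open-vocabulary labeling rule.
    """
    p = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    sim = p @ t.T                # (H*W, C) cosine similarities
    return sim.argmax(axis=1)    # per-patch class index
```

In practice the coarse patch-level labels are upsampled to pixel resolution, which is the stage the paper's mask-refinement module targets.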
Problem

Research questions and friction points this paper is trying to address.

Improves zero-shot open-vocabulary segmentation accuracy and efficiency
Reduces computational latency and parameter requirements for segmentation
Enhances generalization without expensive multi-model combinations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses agglomerative vision foundation model RADIO
Enhances performance via self-correlating recursive attention
Achieves efficiency with fewer parameters and faster latency
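The "self-correlating recursive attention" idea above suggests attention scores built from the features' own similarity matrix and applied repeatedly. A hedged numpy sketch of that mechanism (query-query attention iterated a few steps); the function and parameter names are my own, and this is a guess at the general technique, not the authors' implementation:

```python
import numpy as np

def _softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_correlating_recursive_attention(feats, steps=2, tau=1.0):
    """Recursively refine patch features with query-query attention.

    feats: (N, D) patch embeddings. Instead of query-key scores,
    each step attends using the features' own correlation matrix,
    so patches aggregate from semantically similar patches; iterating
    spreads that grouping further. Illustrative sketch only.
    """
    x = feats.astype(float)
    for _ in range(steps):
        scores = x @ x.T / (tau * np.sqrt(x.shape[1]))
        x = _softmax(scores, axis=-1) @ x
    return x
```

Self-similarity attention of this flavor appears in several zero-shot OVSS methods as a way to sharpen noisy patch features without any training, which is consistent with the parameter-efficiency claims here.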
🔎 Similar Papers
2024-06-23 · International Conference on Learning Representations · Citations: 3