AI Summary
Current open-vocabulary semantic segmentation (OVSS) methods suffer from weak generalization and high computational overhead: supervised approaches are constrained by labeled data, while the most competitive zero-shot methods rely on multi-model ensembles with 850–1,350M parameters and inefficient inference. This work presents the first systematic exploration of the agglomerative vision foundation model RADIO for zero-shot OVSS. We propose a lightweight, three-module architecture comprising self-correlating recursive attention, self-correlating global aggregation, and computationally efficient mask refinement. Integrated into a ViT backbone, our method achieves consistent mIoU gains of 6–30% over prior zero-shot approaches, accelerates inference by 3.95×, and reduces the parameter count to just 105M (2.5× fewer than state-of-the-art ensemble models) while surpassing their performance at significantly lower computational and memory cost.
Abstract
Open-vocabulary semantic segmentation (OVSS) underpins many vision and robotics tasks that require generalizable semantic understanding. Existing approaches either rely on limited segmentation training data, which hinders generalization, or apply zero-shot heuristics to vision-language models (e.g., CLIP), while the most competitive approaches combine multiple models to improve performance at the cost of high computational and memory demands. In this work, we leverage an overlooked agglomerative vision foundation model, RADIO, to improve zero-shot OVSS along three key axes simultaneously: mIoU, latency, and parameter efficiency. We present the first comprehensive study of RADIO for zero-shot OVSS and enhance its performance through self-correlating recursive attention, self-correlating global aggregation, and computationally efficient mask refinement. Our approach, RADSeg, achieves a 6–30% mIoU improvement in the base ViT class while being 3.95× faster and using 2.5× fewer parameters. Surprisingly, RADSeg-base (105M) outperforms previous combinations of huge vision models (850–1,350M) in mIoU, achieving state-of-the-art accuracy with substantially lower computational and memory cost.
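The abstract does not spell out the self-correlating attention formulation, but a common variant in zero-shot OVSS replaces the standard query-key attention with query-query plus key-key similarity, which sharpens spatial locality for dense prediction. The sketch below is a minimal, hypothetical illustration of that idea (the function name, shapes, and the single-head NumPy setup are assumptions, not the paper's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_correlating_attention(q, k, v, steps=1):
    """Toy self-correlating attention: tokens attend via their own
    query-query and key-key similarities (q@q.T + k@k.T) instead of the
    usual q@k.T. `steps > 1` applies the same attention map recursively,
    loosely mirroring a recursive-attention design."""
    n, d = q.shape
    attn = softmax((q @ q.T + k @ k.T) * d ** -0.5, axis=-1)
    out = v
    for _ in range(steps):
        out = attn @ out
    return out

# toy example: 4 tokens with 8-dim features
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))
out = self_correlating_attention(q, k, v, steps=2)
print(out.shape)  # (4, 8)
```

This is a sketch under stated assumptions, not RADSeg itself; in practice such a map is computed per head inside the final ViT blocks of the backbone.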