RefAM: Attention Magnets for Zero-Shot Referral Segmentation

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing referring expression segmentation methods typically rely on fine-tuning or on composing pre-trained models, necessitating additional training and architectural modifications. Method: We propose RefAM—the first training-free, architecture-agnostic zero-shot referring segmentation framework that directly leverages attention features and denoising scores from diffusion transformers for semantic localization in images and videos. Contribution/Results: Our key insight is that stop words act as “attention magnets” in diffusion models, accumulating surplus attention, and we identify global attention sinks (GAS) emerging in deeper layers that can be safely suppressed or redirected. Based on this, we design an attention redistribution strategy that suppresses noise, enhances target activation, and improves cross-modal alignment via stop-word-guided background clustering. Evaluated on multiple zero-shot referring segmentation benchmarks, RefAM significantly outperforms prior approaches, achieving state-of-the-art performance without any parameter updates or model modifications.

📝 Abstract
Most existing approaches to referring segmentation achieve strong performance only through fine-tuning or by composing multiple pre-trained models, often at the cost of additional training and architectural modifications. Meanwhile, large-scale generative diffusion models encode rich semantic information, making them attractive as general-purpose feature extractors. In this work, we introduce a new method that directly exploits features, namely attention scores, from diffusion transformers for downstream tasks, requiring neither architectural modifications nor additional training. To systematically evaluate these features, we extend benchmarks with vision-language grounding tasks spanning both images and videos. Our key insight is that stop words act as attention magnets: they accumulate surplus attention and can be filtered to reduce noise. Moreover, we identify global attention sinks (GAS) emerging in deeper layers and show that they can be safely suppressed or redirected onto auxiliary tokens, leading to sharper and more accurate grounding maps. We further propose an attention redistribution strategy, where appended stop words partition background activations into smaller clusters, yielding sharper and more localized heatmaps. Building on these findings, we develop RefAM, a simple training-free grounding framework that combines cross-attention maps, GAS handling, and redistribution. Across zero-shot referring image and video segmentation benchmarks, our approach consistently outperforms prior methods, establishing a new state of the art without fine-tuning or additional components.
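The pipeline the abstract describes—take text-to-image cross-attention from a diffusion transformer, detect global attention sinks (GAS), filter stop-word tokens, and average the rest into a grounding map—can be sketched in a simplified toy form. The array shapes, the sink threshold, and all names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def grounding_heatmap(cross_attn, token_ids, stopword_ids, sink_threshold=0.5):
    """Toy RefAM-style map extraction.

    cross_attn: (T, P) text-token-to-image-patch attention (rows sum to 1).
    token_ids:  length-T token ids for the referring expression.
    stopword_ids: ids treated as stop words ("attention magnets").
    """
    attn = np.asarray(cross_attn, dtype=float)
    # 1) GAS detection: patches that absorb heavy attention from nearly
    #    every token are treated as sinks and later zeroed out.
    sink = attn.mean(axis=0) > sink_threshold
    # 2) Stop-word filtering: stop words soak up surplus, uninformative
    #    attention, so their rows are dropped.
    keep = np.array([t not in stopword_ids for t in token_ids])
    attn = attn[keep].copy()
    attn[:, sink] = 0.0
    # 3) Renormalize surviving rows and average into one grounding map.
    attn /= attn.sum(axis=1, keepdims=True)
    return attn.mean(axis=0)

# Example: "the cat <eos>" over 4 patches; patch 3 is a global sink.
attn = np.array([
    [0.10, 0.10, 0.10, 0.70],   # "the"   (stop word)
    [0.10, 0.60, 0.10, 0.20],   # "cat"   (content word, target = patch 1)
    [0.05, 0.05, 0.10, 0.80],   # "<eos>" (treated as stop word)
])
heat = grounding_heatmap(attn, token_ids=[0, 1, 2], stopword_ids={0, 2})
# The sink patch is zeroed and the target patch dominates the heatmap.
```

Note that the sink is estimated from all tokens *before* stop-word rows are dropped, so a patch that only the content word attends to is never mistaken for a sink.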
Problem

Research questions and friction points this paper is trying to address.

Exploits diffusion transformer attention for zero-shot referral segmentation
Filters stop words and suppresses global attention sinks
Achieves state-of-the-art performance without training or modifications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses diffusion transformer attention without modifications
Filters stop words to reduce noise in attention
Suppresses global attention sinks for sharper maps
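One way the stop-word-guided background clustering above could be realized, in a deliberately toy form: append stop-word tokens, assign each image patch to whichever token activates it most, and treat patches claimed by the appended stop words as background. The maps, token roles, and function name here are assumptions for illustration, not the paper's method.

```python
import numpy as np

def foreground_mask(token_maps, content_idx):
    """token_maps: (T, P) per-token activation maps over P patches.
    Each patch goes to the token with the highest activation; patches
    claimed by (appended) stop-word tokens count as background."""
    winner = np.argmax(token_maps, axis=0)
    return winner == content_idx

# Row 0: content token ("cat"); rows 1-2: appended stop words that
# partition the diffuse background activations into smaller clusters,
# leaving the content token only the patch it genuinely dominates.
maps = np.array([
    [0.20, 0.60, 0.20, 0.30],   # "cat"
    [0.50, 0.10, 0.10, 0.10],   # appended stop word #1
    [0.10, 0.10, 0.45, 0.40],   # appended stop word #2
])
mask = foreground_mask(maps, content_idx=0)
# Only patch 1 survives as foreground; patches 0, 2, 3 are absorbed
# by the stop-word clusters.
```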