🤖 AI Summary
This work addresses a key limitation of existing unsupervised object-centric learning methods: by relying solely on the final-layer features of Vision Transformers (ViTs), they overlook the rich semantic information embedded in intermediate layers, which constrains segmentation performance. To overcome this, we propose MUFASA, a lightweight, plug-and-play framework that, for the first time, applies slot attention in parallel across multiple ViT encoder layers and integrates the resulting slot representations through a cross-layer slot fusion strategy into a unified object-centric representation. MUFASA is fully compatible with current unsupervised object-centric learning paradigms, achieving state-of-the-art segmentation performance across multiple benchmarks while accelerating training convergence and introducing only minimal inference overhead.
📝 Abstract
Unsupervised object-centric learning (OCL) decomposes visual scenes into distinct entities. Slot attention is a popular approach that represents individual objects as latent vectors, called slots. Current methods obtain these slot representations solely from the last layer of a pre-trained Vision Transformer (ViT), ignoring the valuable, semantically rich information encoded across the other layers. To better utilize this latent semantic information, we introduce MUFASA, a lightweight plug-and-play framework for slot attention-based approaches to unsupervised object segmentation. Our model computes slot attention across multiple feature layers of the ViT encoder, fully leveraging their semantic richness. We propose a fusion strategy that aggregates the slots obtained from multiple layers into a unified object-centric representation. Integrating MUFASA into existing OCL methods improves their segmentation results across multiple datasets, setting a new state of the art while improving training convergence with only minor inference overhead.
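To make the multi-layer idea concrete, here is a minimal PyTorch sketch of what such a scheme could look like: a standard slot-attention module is applied independently to patch features from several ViT layers, and the per-layer slots are fused by a learned weighted sum. This is an illustration only, not MUFASA's actual architecture; the class names (`SlotAttention`, `MultiLayerSlotFusion`), the softmax-weighted fusion, and the omitted details (stochastic slot initialization, residual MLP, decoder) are assumptions made for brevity.

```python
import torch
import torch.nn as nn


class SlotAttention(nn.Module):
    """Minimal slot attention (Locatello et al., 2020); the residual MLP
    and stochastic slot initialization are omitted for brevity."""

    def __init__(self, num_slots: int, dim: int, iters: int = 3):
        super().__init__()
        self.iters = iters
        self.scale = dim ** -0.5
        self.slots_init = nn.Parameter(torch.randn(1, num_slots, dim) * 0.02)
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.gru = nn.GRUCell(dim, dim)
        self.norm_in = nn.LayerNorm(dim)
        self.norm_slots = nn.LayerNorm(dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, D) patch tokens from one ViT layer.
        feats = self.norm_in(feats)
        k, v = self.to_k(feats), self.to_v(feats)
        slots = self.slots_init.expand(feats.size(0), -1, -1)
        for _ in range(self.iters):
            q = self.to_q(self.norm_slots(slots))
            # Softmax over the slot axis: slots compete for each input token.
            attn = (q @ k.transpose(1, 2) * self.scale).softmax(dim=1)
            attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-8)
            updates = attn @ v                                   # (B, S, D)
            slots = self.gru(
                updates.reshape(-1, updates.size(-1)),
                slots.reshape(-1, updates.size(-1)),
            ).view(updates.shape)
        return slots


class MultiLayerSlotFusion(nn.Module):
    """Hypothetical sketch: run slot attention on several ViT layers in
    parallel, then fuse the per-layer slots with learned softmax weights."""

    def __init__(self, num_layers: int, num_slots: int, dim: int):
        super().__init__()
        self.slot_attn = nn.ModuleList(
            [SlotAttention(num_slots, dim) for _ in range(num_layers)]
        )
        self.fusion_logits = nn.Parameter(torch.zeros(num_layers))

    def forward(self, layer_feats: list[torch.Tensor]) -> torch.Tensor:
        # layer_feats: one (B, N, D) tensor per chosen encoder layer.
        per_layer = [sa(f) for sa, f in zip(self.slot_attn, layer_feats)]
        w = self.fusion_logits.softmax(dim=0)
        return sum(wi * s for wi, s in zip(w, per_layer))        # (B, S, D)


# Illustrative usage with made-up shapes: 3 layers, 7 slots, dim 768.
fuser = MultiLayerSlotFusion(num_layers=3, num_slots=7, dim=768)
feats = [torch.randn(2, 196, 768) for _ in range(3)]
print(fuser(feats).shape)  # torch.Size([2, 7, 768])
```

In practice, the per-layer features would come from intermediate ViT blocks, e.g., via forward hooks or `get_intermediate_layers` in DINO-style models. Note also that the weighted sum above implicitly assumes slot correspondence across layers; the paper's actual cross-layer slot fusion strategy may handle alignment differently and can differ substantially from this sketch.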