SlotMatch: Distilling Temporally Consistent Object-Centric Representations for Unsupervised Video Segmentation

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
In unsupervised video segmentation, the absence of supervision signals and scene complexity often lead to overly parameterized models with high computational overhead. To address this, we propose SlotMatch—a knowledge distillation framework that transfers temporally consistent object-centric representations from a teacher (SlotContrast) to a lightweight student model without auxiliary losses or additional supervision. Crucially, alignment is achieved solely via cosine similarity between slot embeddings, eliminating the need for explicit alignment objectives, which our analysis shows are redundant. Theoretically grounded and empirically validated, SlotMatch significantly simplifies the distillation pipeline. On two standard benchmarks, the distilled student model reduces parameters by 3.6× and accelerates inference by 1.9×, while matching or even surpassing the teacher’s segmentation accuracy—and consistently outperforming existing unsupervised approaches.

📝 Abstract
Unsupervised video segmentation is a challenging computer vision task, especially due to the lack of supervisory signals coupled with the complexity of visual scenes. To overcome this challenge, state-of-the-art models based on slot attention often have to rely on large and computationally expensive neural architectures. To this end, we propose a simple knowledge distillation framework that effectively transfers object-centric representations to a lightweight student. The proposed framework, called SlotMatch, aligns corresponding teacher and student slots via cosine similarity, requiring no additional distillation objectives or auxiliary supervision. The simplicity of SlotMatch is confirmed via theoretical and empirical evidence, both indicating that integrating additional losses is redundant. We conduct experiments on two datasets to compare the state-of-the-art teacher model, SlotContrast, with our distilled student. The results show that our student based on SlotMatch matches and even outperforms its teacher, while using 3.6× fewer parameters and running 1.9× faster. Moreover, our student surpasses previous unsupervised video segmentation models.
Problem

Research questions and friction points this paper is trying to address.

Unsupervised video segmentation lacks supervisory signals
Existing models require large, expensive neural architectures
Lightweight student model needs effective knowledge distillation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge distillation for lightweight student model
Cosine similarity aligns teacher and student slots
No additional distillation objectives needed
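The core alignment objective can be illustrated with a minimal NumPy sketch. This is our own illustration, not the paper's reference code: the function name `slotmatch_loss` is hypothetical, and we assume teacher and student slots are already matched one-to-one and share the same embedding dimension (in practice a projection head and a slot-matching step may be needed).

```python
import numpy as np

def slotmatch_loss(teacher_slots, student_slots, eps=1e-8):
    """Cosine-similarity distillation loss between matched slot embeddings.

    teacher_slots, student_slots: arrays of shape (num_slots, dim),
    assumed matched one-to-one (slot i in the student corresponds to
    slot i in the teacher). Returns the mean of (1 - cosine similarity)
    over slots, which is minimized when the slots are perfectly aligned.
    """
    # L2-normalize each slot vector so the dot product is a cosine similarity
    t = teacher_slots / (np.linalg.norm(teacher_slots, axis=-1, keepdims=True) + eps)
    s = student_slots / (np.linalg.norm(student_slots, axis=-1, keepdims=True) + eps)
    cos = np.sum(t * s, axis=-1)       # per-slot cosine similarity in [-1, 1]
    return float(np.mean(1.0 - cos))   # 0 when all slot pairs point the same way
```

Note that scale differences between teacher and student embeddings do not affect this loss, since each slot is normalized before comparison; only the direction of each slot vector matters.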