Learning Sparse Visual Representations via Spatial-Semantic Factorization

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inherent tension in self-supervised learning between semantic understanding and image reconstruction: high-level semantic approaches often sacrifice spatial structure, while generative methods lack abstraction capacity. To reconcile these objectives, the authors propose a spatial-semantic factorization that disentangles visual features into a product of low-rank semantic concepts and their spatial distributions. Using only 16 sparse tokens, this approach simultaneously achieves high-fidelity reconstruction (ImageNet FID = 2.60) and strong semantic performance (79.10% ImageNet accuracy). The method combines low-rank decomposition, DINO-style augmentation-aligned learning on the semantic tokens, and a localization matrix that preserves spatial mapping, all within the proposed STELLAR framework, demonstrating that a single sparse representation can serve both discriminative and generative tasks.
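
The following is a minimal, self-contained sketch (not the authors' code) of what such a factorization could look like: a dense feature grid is approximated as the product of K = 16 semantic tokens and a per-token spatial distribution (the localization matrix). The query-based pooling, tensor names, and dimensions are illustrative assumptions.

```python
# Hedged sketch of a spatial-semantic factorization: dense features are
# approximated as (semantic tokens) x (localization matrix). Names and the
# query-based pooling are assumptions, not the paper's exact architecture.
import torch

B, D, H, W = 2, 768, 14, 14        # batch, feature dim, spatial grid (assumed)
K = 16                             # number of sparse tokens (as in the paper)

dense = torch.randn(B, D, H, W)    # stand-in for dense backbone features
queries = torch.randn(K, D)        # learnable semantic queries (assumed)

flat = dense.flatten(2)            # (B, D, H*W)

# Localization matrix: for each token, a distribution over spatial positions.
logits = torch.einsum('kd,bdn->bkn', queries, flat) / D ** 0.5
loc = logits.softmax(dim=-1)                       # (B, K, H*W)

# Semantic tokens: features pooled under each token's spatial distribution.
sem = torch.einsum('bkn,bdn->bkd', loc, flat)      # (B, K, D)

# Low-rank reconstruction of the dense grid from the factorized form.
recon = torch.einsum('bkd,bkn->bdn', sem, loc).reshape(B, D, H, W)
print(recon.shape)  # torch.Size([2, 768, 14, 14])
```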

📝 Abstract
Self-supervised learning (SSL) faces a fundamental conflict between semantic understanding and image reconstruction. High-level semantic SSL (e.g., DINO) relies on global tokens that are forced to be location-invariant for augmentation alignment, a process that inherently discards the spatial coordinates required for reconstruction. Conversely, generative SSL (e.g., MAE) preserves dense feature grids for reconstruction but fails to produce high-level abstractions. We introduce STELLAR, a framework that resolves this tension by factorizing visual features into a low-rank product of semantic concepts and their spatial distributions. This disentanglement allows us to perform DINO-style augmentation alignment on the semantic tokens while maintaining the precise spatial mapping in the localization matrix necessary for pixel-level reconstruction. We demonstrate that as few as 16 sparse tokens under this factorized form are sufficient to simultaneously support high-quality reconstruction (2.60 FID) and match the semantic performance of dense backbones (79.10% ImageNet accuracy). Our results highlight STELLAR as a versatile sparse representation that bridges the gap between discriminative and generative vision by strategically separating semantic identity from spatial geometry. Code available at https://aka.ms/stellar.
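
The training-signal split described in the abstract, augmentation alignment applied only to the semantic tokens while the localization matrix supports reconstruction, can be illustrated with the hedged sketch below. The loss forms (a DINO-like cross-entropy between teacher and student tokens plus a mean-squared error on the low-rank product), temperatures, and tensor names are assumptions, not the paper's actual objectives.

```python
# Hedged sketch of the two objectives implied by the abstract: a DINO-style
# alignment loss on the semantic tokens only, plus a reconstruction loss on
# the full semantic x localization product. All details here are assumed.
import torch
import torch.nn.functional as F

def alignment_loss(student_sem, teacher_sem, temp_s=0.1, temp_t=0.04):
    """DINO-like cross-entropy between teacher and student token distributions."""
    t = (teacher_sem / temp_t).softmax(dim=-1).detach()   # teacher: no gradients
    s = (student_sem / temp_s).log_softmax(dim=-1)
    return -(t * s).sum(dim=-1).mean()

def reconstruction_loss(sem, loc, target):
    """MSE between the low-rank product and target dense features."""
    recon = torch.einsum('bkd,bkn->bdn', sem, loc)        # (B, D, N)
    return F.mse_loss(recon, target)

# Toy tensors standing in for two augmented views and a dense target.
B, K, D, N = 2, 16, 768, 196
sem_student, sem_teacher = torch.randn(B, K, D), torch.randn(B, K, D)
loc = torch.rand(B, K, N).softmax(dim=-1)                 # localization matrix
target = torch.randn(B, D, N)

loss = alignment_loss(sem_student, sem_teacher) + reconstruction_loss(sem_student, loc, target)
print(float(loss))
```
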
Problem

Research questions and friction points this paper is trying to address.

self-supervised learning
semantic understanding
image reconstruction
spatial representation
visual representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

spatial-semantic factorization
sparse visual representation
self-supervised learning
disentangled representation
low-rank factorization