Interpreting Video Representations with Spatio-Temporal Sparse Autoencoders

📅 2026-04-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Standard sparse autoencoders compromise temporal consistency in video representation learning, producing unstable inter-frame features. This work proposes a novel approach that integrates spatio-temporal contrastive learning with a Matryoshka hierarchical grouping mechanism, significantly enhancing temporal coherence while preserving feature interpretability. By introducing an adjustable contrastive loss weight, the method effectively balances reconstruction fidelity against temporal stability, and further uncovers artifacts in monosemanticity metrics caused by backbone misalignment. Experimental results demonstrate that the proposed method improves action classification accuracy by 3.9%, achieves up to a 2.8× gain in text-to-video retrieval R@1, and substantially strengthens the temporal autocorrelation of learned features.
📝 Abstract
We present the first systematic study of Sparse Autoencoders (SAEs) on video representations. Standard SAEs decompose video into interpretable, monosemantic features but destroy temporal coherence: hard TopK selection produces unstable feature assignments across frames, reducing autocorrelation by 36%. We propose spatio-temporal contrastive objectives and Matryoshka hierarchical grouping that recover and even exceed raw temporal coherence. The contrastive loss weight controls a tunable trade-off between reconstruction and temporal coherence. A systematic ablation on two backbones and two datasets shows that different configurations excel at different goals: reconstruction fidelity, temporal coherence, action discrimination, or interpretability. Contrastive SAE features improve action classification by +3.9% over raw features and text-video retrieval by up to 2.8× R@1. A cross-backbone analysis reveals that standard monosemanticity metrics contain a backbone-alignment artifact: both DINOv2 and VideoMAE produce equally monosemantic features under neutral (CLIP) similarity. Causal ablation confirms that contrastive training concentrates predictive signal into a small number of identifiable features.
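The mechanism the abstract describes — hard TopK feature selection plus a weighted temporal coherence term — can be sketched in a few lines. This is a hypothetical NumPy illustration, not the paper's code: `TopKSAE`, `topk_mask`, the weight `lam`, and the cosine-based coherence term are all assumptions chosen to make the trade-off concrete; the paper's actual contrastive objective and Matryoshka grouping are not reproduced here.

```python
import numpy as np

def topk_mask(z, k):
    """Hard TopK: zero all but the k largest activations per row.
    This hard selection is what can flip feature assignments between frames."""
    drop = np.argsort(z, axis=-1)[:, :-k]  # indices of all but the top-k
    out = z.copy()
    np.put_along_axis(out, drop, 0.0, axis=-1)
    return out

class TopKSAE:
    """Minimal sparse autoencoder sketch (illustrative shapes and init)."""
    def __init__(self, d_in, d_latent, k, seed=0):
        rng = np.random.default_rng(seed)
        self.W_enc = rng.standard_normal((d_in, d_latent)) / np.sqrt(d_in)
        self.W_dec = rng.standard_normal((d_latent, d_in)) / np.sqrt(d_latent)
        self.k = k

    def encode(self, x):
        return topk_mask(np.maximum(x @ self.W_enc, 0.0), self.k)

    def decode(self, z):
        return z @ self.W_dec

def sae_loss(sae, frames, lam):
    """Reconstruction error plus lam * temporal coherence penalty.
    frames: (T, d_in) representations of T consecutive frames."""
    z = sae.encode(frames)
    recon = np.mean((frames - sae.decode(z)) ** 2)
    # Pull codes of adjacent frames together: penalize 1 - cos(z_t, z_{t+1}).
    a, b = z[:-1], z[1:]
    denom = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + 1e-8
    temporal = np.mean(1.0 - np.sum(a * b, axis=-1) / denom)
    return recon + lam * temporal
```

Sweeping `lam` from 0 upward traces the reconstruction-versus-coherence trade-off the abstract attributes to the contrastive loss weight: `lam = 0` recovers a plain TopK SAE objective, while larger values favor stable codes across frames.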
Problem

Research questions and friction points this paper is trying to address.

Sparse Autoencoders
Video Representations
Temporal Coherence
Monosemantic Features
Interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse Autoencoders
Spatio-Temporal Contrastive Learning
Temporal Coherence
Monosemantic Features
Video Representation