Decomposing multimodal embedding spaces with group-sparse autoencoders

📅 2026-01-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Standard sparse autoencoders tend to learn “split dictionaries” in multimodal embedding spaces, where features activate exclusively for a single modality, thereby disrupting cross-modal semantic alignment. To address this issue, this work proposes an autoencoder framework that integrates group-sparsity regularization with cross-modal random masking, explicitly promoting cross-modal consistency within multimodal embedding spaces such as those of CLIP or CLAP. The proposed approach mitigates modality splitting, substantially reduces the occurrence of dead neurons, and improves the semantic meaningfulness, cross-modal alignment, interpretability, and controllability of the learned features in multimodal tasks.

📝 Abstract
The Linear Representation Hypothesis asserts that the embeddings learned by neural networks can be understood as linear combinations of features corresponding to high-level concepts. Based on this ansatz, sparse autoencoders (SAEs) have recently become a popular method for decomposing embeddings into a sparse combination of linear directions, which have been shown empirically to often correspond to human-interpretable semantics. However, recent attempts to apply SAEs to multimodal embedding spaces (such as the popular CLIP embeddings for image/text data) have found that SAEs often learn “split dictionaries”, where most of the learned sparse features are essentially unimodal, active only for data of a single modality. In this work, we study how to effectively adapt SAEs for the setting of multimodal embeddings while ensuring multimodal alignment. We first argue that the existence of a split dictionary decomposition on an aligned embedding space implies the existence of a non-split dictionary with improved modality alignment. Then, we propose a new SAE-based approach to multimodal embedding decomposition using cross-modal random masking and group-sparse regularization. We apply our method to popular embeddings for image/text (CLIP) and audio/text (CLAP) data and show that, compared to standard SAEs, our approach learns a more multimodal dictionary while reducing the number of dead neurons and improving feature semanticity. We finally demonstrate how this improvement in alignment of concepts between modalities can enable improvements in the interpretability and control of cross-modal tasks.
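The abstract's key ingredient is a group-sparsity penalty that ties each dictionary atom's activations together across modalities, so that an atom is pushed toward being active for both modalities or neither. A minimal sketch of such an ℓ2,1 group penalty (a hypothetical illustration, not the authors' implementation; the function name and tensor shapes are assumptions):

```python
import numpy as np

def group_sparse_penalty(z_img: np.ndarray, z_txt: np.ndarray) -> float:
    """l2,1 group-sparsity penalty over paired modality activations.

    z_img, z_txt: (batch, n_atoms) sparse codes for the same dictionary
    atoms, produced from the image and text embeddings respectively.
    (Hypothetical sketch, not the paper's code.)
    """
    # Stack the two modalities so each atom's activations form one group.
    groups = np.stack([z_img, z_txt], axis=-1)   # (batch, n_atoms, 2)
    per_atom = np.linalg.norm(groups, axis=-1)   # l2 norm within each group
    return float(per_atom.sum())                 # l1 sum across groups

# Example: atom 0 is active in both modalities, atom 1 in neither.
z_img = np.array([[3.0, 0.0]])
z_txt = np.array([[4.0, 0.0]])
print(group_sparse_penalty(z_img, z_txt))  # 5.0 (= sqrt(3^2 + 4^2))
```

Unlike a plain ℓ1 penalty, which treats each modality's activation independently and so tolerates unimodal "split" atoms, minimizing this group norm favors zeroing entire atoms jointly across modalities.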
Problem

Research questions and friction points this paper is trying to address.

multimodal embeddings
sparse autoencoders
split dictionaries
modality alignment
interpretable semantics
Innovation

Methods, ideas, or system contributions that make the work stand out.

group-sparse autoencoders
multimodal embedding decomposition
cross-modal alignment
sparse coding
interpretable representations
Chiraag Kaushik
School of Electrical and Computer Engineering, Georgia Institute of Technology
Davis Barch
Dolby Laboratories
Andrea Fanelli
Principal Researcher at Dolby Laboratories
Multimodal AI · Audio AI · Machine Perception · Biomedical Signal Processing · Wearable Devices