🤖 AI Summary
Standard sparse autoencoders tend to learn "split dictionaries" in multimodal embedding spaces, where features activate exclusively for a single modality, disrupting cross-modal semantic alignment. To address this, the authors propose an autoencoder framework that combines group-sparse regularization with cross-modal random masking, explicitly promoting cross-modal consistency in multimodal embedding spaces such as those of CLIP or CLAP. The approach mitigates modality splitting, reduces the number of dead neurons, and improves the semanticity, cross-modal alignment, interpretability, and controllability of the learned features in multimodal tasks.
📝 Abstract
The Linear Representation Hypothesis asserts that the embeddings learned by neural networks can be understood as linear combinations of features corresponding to high-level concepts. Based on this ansatz, sparse autoencoders (SAEs) have recently become a popular method for decomposing embeddings into a sparse combination of linear directions, which have been shown empirically to often correspond to human-interpretable semantics. However, recent attempts to apply SAEs to multimodal embedding spaces (such as the popular CLIP embeddings for image/text data) have found that SAEs often learn "split dictionaries", where most of the learned sparse features are essentially unimodal, active only for data of a single modality. In this work, we study how to effectively adapt SAEs to the setting of multimodal embeddings while ensuring multimodal alignment. We first argue that the existence of a split dictionary decomposition on an aligned embedding space implies the existence of a non-split dictionary with improved modality alignment. Then, we propose a new SAE-based approach to multimodal embedding decomposition using cross-modal random masking and group-sparse regularization. We apply our method to popular embeddings for image/text (CLIP) and audio/text (CLAP) data and show that, compared to standard SAEs, our approach learns a more multimodal dictionary while reducing the number of dead neurons and improving feature semanticity. We finally demonstrate how this improved alignment of concepts between modalities enables improvements in the interpretability and control of cross-modal tasks.
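To make the two ingredients concrete, here is a minimal NumPy sketch of one training step combining a group-sparse (L2,1-style) penalty that ties each dictionary feature's activations across both modalities with a cross-modal random mask that reconstructs one modality's embedding from the other's codes. All names, dimensions, and loss weights are illustrative assumptions, not the paper's actual specification:

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, n = 16, 32, 8                        # embedding dim, dictionary size, batch size
W_enc = rng.standard_normal((d, k)) * 0.1  # shared encoder (assumption: tied across modalities)
W_dec = rng.standard_normal((k, d)) * 0.1  # shared decoder dictionary

# Paired embeddings from two modalities (e.g. CLIP image/text for the same items).
x_img = rng.standard_normal((n, d))
x_txt = rng.standard_normal((n, d))

def encode(x):
    # Standard ReLU SAE encoder.
    return np.maximum(x @ W_enc, 0.0)

z_img, z_txt = encode(x_img), encode(x_txt)

# Cross-modal random masking (illustrative form): for a random subset of pairs,
# reconstruct the image embedding from the TEXT codes, forcing features to
# carry the shared cross-modal concept rather than a modality-specific one.
mask = rng.random(n) < 0.5
z_for_img = np.where(mask[:, None], z_txt, z_img)
recon_img = z_for_img @ W_dec
recon_loss = np.mean((recon_img - x_img) ** 2)

# Group-sparse penalty: for each feature, its activations on the two paired
# modalities form one group, so a feature tends to switch on or off jointly
# for both modalities instead of splitting into unimodal features.
groups = np.stack([z_img, z_txt], axis=0)          # (2, n, k)
group_norms = np.sqrt((groups ** 2).sum(axis=0))   # (n, k): per-pair, per-feature norm
group_sparsity = group_norms.sum(axis=1).mean()    # L2,1 norm averaged over the batch

loss = recon_loss + 1e-2 * group_sparsity          # weight 1e-2 is arbitrary here
```

In a real implementation both modalities would be reconstructed (with the mask applied symmetrically) and the parameters updated by gradient descent; the sketch only shows how the two regularizers enter the loss.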