AI Summary
This study addresses two challenges in vision-language models (VLMs): high visual polysemy and limited interpretability and controllability. We introduce sparse autoencoders (SAEs) into the visual encoders of VLMs such as CLIP for the first time, establishing a framework for quantifying monosemanticity. Methodologically, the SAEs perform unsupervised disentanglement of visual features, achieving neuron-level semantic separation without modifying model parameters. Our contributions are threefold: (1) empirical identification of learnable monosemantic neurons in VLMs, whose activation patterns exhibit significant hierarchical alignment with expert taxonomies (e.g., the iNaturalist taxonomy); (2) a cross-modal controllable intervention mechanism that requires no fine-tuning; and (3) output manipulation on models such as LLaVA with zero parameter modification, improving image captioning accuracy by 12.7%. Collectively, this work establishes a new paradigm for interpretable representation learning and controllable multimodal generation.
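To make the disentanglement step concrete, below is a minimal sketch of a sparse autoencoder of the kind described, trained on frozen CLIP vision features. It assumes a standard ReLU SAE with an overcomplete dictionary and an L1 sparsity penalty; the expansion factor, sparsity coefficient, and choice of feature layer are illustrative assumptions, not the study's reported settings.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Standard ReLU SAE: an overcomplete dictionary over frozen vision features.
    The expansion factor is an illustrative assumption."""
    def __init__(self, d_model: int, expansion: int = 8):
        super().__init__()
        d_hidden = d_model * expansion            # overcomplete latent code
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        z = torch.relu(self.encoder(x))           # sparse codes: candidate monosemantic units
        x_hat = self.decoder(z)                   # reconstruction of the frozen features
        return x_hat, z

def sae_loss(x, x_hat, z, l1_coef: float = 1e-3):
    # Reconstruction fidelity plus an L1 penalty that encourages sparse,
    # and hence more monosemantic, codes. l1_coef is an assumed value.
    recon = (x - x_hat).pow(2).mean()
    sparsity = z.abs().mean()
    return recon + l1_coef * sparsity
```

Because the SAE is trained purely to reconstruct activations, the underlying CLIP weights stay frozen, which is what makes the zero-parameter-modification claim possible.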
Abstract
Sparse Autoencoders (SAEs) have recently been shown to enhance interpretability and steerability in Large Language Models (LLMs). In this work, we extend the application of SAEs to Vision-Language Models (VLMs), such as CLIP, and introduce a comprehensive framework for evaluating monosemanticity in vision representations. Our experimental results reveal that SAEs trained on VLMs significantly enhance the monosemanticity of individual neurons while also exhibiting hierarchical representations that align well with expert-defined structures (e.g., the iNaturalist taxonomy). Most notably, we demonstrate that applying SAEs to intervene on a CLIP vision encoder directly steers the output of multimodal LLMs (e.g., LLaVA) without any modification to the underlying model. These findings emphasize the practicality and efficacy of SAEs as an unsupervised approach for enhancing both the interpretability and control of VLMs.
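As a sketch of how such an intervention could work, the function below reuses the SparseAutoencoder from the earlier sketch to steer a frozen vision encoder's output before it reaches the language model: encode the features, amplify one latent, and decode. The feature index, scaling factor, and hook placement are hypothetical choices, not the paper's exact procedure; in practice the target neuron would be selected by its monosemanticity score.

```python
import torch

@torch.no_grad()
def steer_features(sae: SparseAutoencoder, feats: torch.Tensor,
                   feature_idx: int, scale: float = 10.0) -> torch.Tensor:
    """Intervene on frozen vision features via a trained SAE.
    feature_idx and scale are hypothetical, illustrative values."""
    z = torch.relu(sae.encoder(feats))    # sparse codes for each image token
    z[..., feature_idx] *= scale          # amplify (or, with scale=0, ablate) one concept
    return sae.decoder(z)                 # steered features passed on to the LLM
```

In a LLaVA-style pipeline, such a function could be attached as a forward hook on the vision tower's output, ahead of the multimodal projector, so the amplified concept propagates into the generated caption while every weight in CLIP and the LLM remains untouched.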