AI Summary
This work addresses the significant societal and spurious biases embedded in vision-language models like CLIP, which arise from uncurated training data and are difficult to disentangle from semantics using existing post-hoc debiasing methods in dense embedding spaces. To overcome this limitation, the authors propose Sparse Embedding Modulation (SEM), a novel framework that introduces sparse autoencoders (SAEs) into vision-language model debiasing for the first time. By decomposing CLIP's text embeddings into a sparse latent space, SEM precisely identifies and modulates bias-associated neurons, enabling nonlinear, zero-shot post-processing intervention. Evaluated across four benchmark datasets and two CLIP backbones, the method substantially improves fairness while preserving semantic fidelity, thereby surpassing the constraints of conventional dense-space debiasing approaches.
Abstract
Models that bridge vision and language, such as CLIP, are key components of multimodal AI, yet their large-scale, uncurated training data introduce severe social and spurious biases. Existing post-hoc debiasing methods often operate directly in the dense CLIP embedding space, where bias and task-relevant information are highly entangled. This entanglement limits their ability to remove bias without degrading semantic fidelity. In this work, we propose Sparse Embedding Modulation (SEM), a post-hoc, zero-shot debiasing framework that operates in a Sparse Autoencoder (SAE) latent space. By decomposing CLIP text embeddings into disentangled features, SEM identifies and modulates bias-relevant neurons while preserving query-relevant ones. This enables more precise, non-linear interventions. Across four benchmark datasets and two CLIP backbones, SEM achieves substantial fairness gains in retrieval and zero-shot classification. Our results demonstrate that sparse latent representations provide an effective foundation for post-hoc debiasing of vision-language models.
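The core operation described above — encoding a dense embedding into a sparse latent space, suppressing bias-associated neurons, and decoding back — can be illustrated with a minimal sketch. Note that the SAE weights, dimensions, and the `modulate` helper below are illustrative stand-ins (the paper's actual architecture, neuron-selection procedure, and modulation schedule are not specified in the abstract); a real SAE would be trained on CLIP text embeddings, and the bias-neuron indices would come from the identification step SEM describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pretrained" SAE weights (illustrative shapes only; not the paper's).
d_embed, d_sparse = 8, 32  # dense embedding dim, overcomplete sparse dim
W_enc = rng.standard_normal((d_embed, d_sparse)) * 0.1
b_enc = np.zeros(d_sparse)
W_dec = rng.standard_normal((d_sparse, d_embed)) * 0.1

def sae_encode(x):
    # ReLU activation yields a non-negative, (approximately) sparse code,
    # as in a standard sparse autoencoder.
    return np.maximum(x @ W_enc + b_enc, 0.0)

def sae_decode(z):
    # Linear decoder maps the sparse code back to the dense embedding space.
    return z @ W_dec

def modulate(x, bias_idx, scale=0.0):
    """Hypothetical helper: scale down assumed bias-associated latent
    neurons (scale=0 zeroes them out), then decode back to dense space."""
    z = sae_encode(x)
    z[..., bias_idx] *= scale
    return sae_decode(z)

emb = rng.standard_normal(d_embed)           # stand-in CLIP text embedding
debiased = modulate(emb, bias_idx=[3, 17])   # indices chosen arbitrarily
```

Because the intervention happens on individual sparse neurons rather than on a linear subspace of the dense embedding, the overall map from input to debiased embedding is non-linear (the ReLU gates which features are active), which is the kind of intervention the dense-space linear-projection methods cannot express.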