SEM: Sparse Embedding Modulation for Post-Hoc Debiasing of Vision-Language Models

πŸ“… 2026-03-19
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the significant social and spurious biases embedded in vision-language models like CLIP, which arise from uncurated training data and are difficult to disentangle from semantics using existing post-hoc debiasing methods that operate in dense embedding spaces. To overcome this limitation, the authors propose Sparse Embedding Modulation (SEM), a framework that, for the first time, applies sparse autoencoders (SAEs) to vision-language model debiasing. By decomposing CLIP's text embeddings into a sparse latent space, SEM precisely identifies and modulates bias-associated neurons, enabling nonlinear, zero-shot post-processing intervention. Evaluated across four benchmark datasets and two CLIP backbones, the method substantially improves fairness while preserving semantic fidelity, overcoming the limitations of conventional dense-space debiasing approaches.

πŸ“ Abstract
Models that bridge vision and language, such as CLIP, are key components of multimodal AI, yet their large-scale, uncurated training data introduce severe social and spurious biases. Existing post-hoc debiasing methods often operate directly in the dense CLIP embedding space, where bias and task-relevant information are highly entangled. This entanglement limits their ability to remove bias without degrading semantic fidelity. In this work, we propose Sparse Embedding Modulation (SEM), a post-hoc, zero-shot debiasing framework that operates in a Sparse Autoencoder (SAE) latent space. By decomposing CLIP text embeddings into disentangled features, SEM identifies and modulates bias-relevant neurons while preserving query-relevant ones. This enables more precise, non-linear interventions. Across four benchmark datasets and two CLIP backbones, SEM achieves substantial fairness gains in retrieval and zero-shot classification. Our results demonstrate that sparse latent representations provide an effective foundation for post-hoc debiasing of vision-language models.
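The pipeline the abstract describes (encode an embedding into a sparse latent space, suppress bias-associated neurons, decode back) can be sketched as follows. This is a minimal illustration with toy dimensions and a randomly initialized SAE; the actual SEM architecture, how bias neurons are identified, and the modulation rule are not specified here, so `bias_neurons` and all sizes are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions): embedding dim d, overcomplete sparse latent dim k.
d, k = 8, 32
W_enc = rng.normal(size=(d, k)) / np.sqrt(d)
b_enc = np.zeros(k)
W_dec = rng.normal(size=(k, d)) / np.sqrt(k)

def encode(x):
    # ReLU encoder yields a non-negative, typically sparse code.
    return np.maximum(x @ W_enc + b_enc, 0.0)

def decode(z):
    # Linear decoder maps the sparse code back to the embedding space.
    return z @ W_dec

def modulate(x, bias_neurons):
    # Zero out bias-associated latent neurons, keep the rest, and decode.
    # In practice the bias neurons would be identified from data;
    # the indices used here are purely illustrative.
    z = encode(x)
    z[..., bias_neurons] = 0.0
    return decode(z)

x = rng.normal(size=d)                      # stand-in for a CLIP text embedding
x_debiased = modulate(x, bias_neurons=[3, 17])
```

Because the intervention happens on individual latent coordinates after a nonlinearity, it is a non-linear edit of the original embedding, unlike projection-based methods that subtract a fixed direction in the dense space.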
Problem

Research questions and friction points this paper is trying to address.

vision-language models
bias
post-hoc debiasing
embedding space
semantic fidelity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse Embedding Modulation
post-hoc debiasing
Sparse Autoencoder
vision-language models
disentangled representations
πŸ”Ž Similar Papers
No similar papers found.