🤖 AI Summary
This work addresses a challenge in multilingual dense retrieval: embedding vectors often conflate semantic content with language-identity cues, producing intra-language bias and suppressing cross-lingual relevance. The authors propose a post-processing approach based on sparse autoencoders that, without modifying the original model or re-encoding texts, identifies and suppresses latent units associated with language identity. This enables controlled removal of language-specific signals while reconstructing embeddings in their original dimensionality, ensuring compatibility with existing vector databases. The authors present this as the first method to disentangle language and semantic information in embedding space through post-processing alone. The technique substantially improves cross-lingual retrieval performance, particularly for language pairs with divergent writing systems, while preserving the original index structure.
📝 Abstract
Dense retrieval in multilingual settings often searches over mixed-language collections, yet multilingual embeddings encode language identity alongside semantics. This language signal can inflate similarity for same-language pairs and crowd out relevant evidence written in other languages. We propose LANGSAE EDITING, a post-hoc sparse autoencoder trained on pooled embeddings that enables controllable removal of language-identity signal directly in vector space. The method identifies language-associated latent units using cross-language activation statistics, suppresses these units at inference time, and reconstructs embeddings in the original dimensionality, making it compatible with existing vector databases without retraining the base encoder or re-encoding raw text. Experiments across multiple languages show consistent improvements in ranking quality and cross-language coverage, with especially strong gains for script-distinct languages.
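The pipeline the abstract describes (encode embeddings with a sparse autoencoder, score latent units by cross-language activation statistics, zero the language-associated units, decode back to the original dimensionality) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the tiny tied-weight SAE, the toy dimensions, and the standard-deviation scoring rule are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 8, 16  # embedding dim and SAE latent dim (toy sizes, not from the paper)

# Stand-in for a trained SAE: random tied weights. In practice the SAE is
# trained to reconstruct pooled embeddings with a sparsity penalty.
W_enc = rng.normal(size=(d, k)) / np.sqrt(d)
W_dec = W_enc.T  # tied decoder

def encode(x):
    """Sparse latent code (ReLU keeps activations nonnegative and sparse-ish)."""
    return np.maximum(0.0, x @ W_enc)

def decode(z):
    """Reconstruct embeddings in the original dimensionality d."""
    return z @ W_dec

def language_unit_scores(embs_by_lang):
    """Score each latent unit by how much its mean activation varies across
    languages; a high spread suggests the unit tracks language identity.
    (Std-dev of per-language means is one plausible statistic, assumed here.)"""
    means = np.stack([encode(e).mean(axis=0) for e in embs_by_lang.values()])
    return means.std(axis=0)

def debias(x, lang_units):
    """Suppress language-associated latents, then reconstruct."""
    z = encode(x)
    z[..., lang_units] = 0.0
    return decode(z)

# Usage: fake per-language embedding sets with a shared offset standing in
# for a language-identity direction, then suppress the top-scored units.
embs = {
    "en": rng.normal(size=(32, d)),
    "zh": rng.normal(size=(32, d)) + 1.0,  # synthetic language offset
}
scores = language_unit_scores(embs)
top_units = np.argsort(scores)[-3:]
edited = debias(embs["zh"], top_units)
print(edited.shape)  # same dimensionality as the input embeddings
```

Because the edit happens purely in vector space and the output keeps the original dimensionality, the debiased vectors can be written back into an existing index without touching the base encoder.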