LANGSAE EDITING: Improving Multilingual Information Retrieval via Post-hoc Language Identity Removal

📅 2026-01-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge in multilingual dense retrieval where embedding vectors often conflate semantic content with language identity cues, leading to intra-language bias and suppressed cross-lingual relevance. The authors propose a post-processing approach based on sparse autoencoders that, without modifying the original model or re-encoding texts, precisely identifies and suppresses latent units associated with language identity. This enables controlled removal of language-specific signals while reconstructing embeddings in their original dimensionality, ensuring compatibility with existing vector databases. According to the authors, this is the first method to effectively disentangle language and semantic information in embedding space through post-processing alone. The technique significantly enhances cross-lingual retrieval performance, particularly for language pairs with divergent writing systems, all while preserving the original index structure.

📝 Abstract
Dense retrieval in multilingual settings often searches over mixed-language collections, yet multilingual embeddings encode language identity alongside semantics. This language signal can inflate similarity for same-language pairs and crowd out relevant evidence written in other languages. We propose LANGSAE EDITING, a post-hoc sparse autoencoder trained on pooled embeddings that enables controllable removal of language-identity signal directly in vector space. The method identifies language-associated latent units using cross-language activation statistics, suppresses these units at inference time, and reconstructs embeddings in the original dimensionality, making it compatible with existing vector databases without retraining the base encoder or re-encoding raw text. Experiments across multiple languages show consistent improvements in ranking quality and cross-language coverage, with especially strong gains for script-distinct languages.
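The editing pipeline the abstract describes can be sketched in a few lines: encode an embedding into a sparse latent code, rank latent units by how much their mean activation varies across languages, zero out the top-ranked (language-associated) units, and decode back to the original dimensionality. This is a minimal illustration only: the weights below are random placeholders standing in for a trained SAE, and the dimensions, the variance-based scoring, and all function names are assumptions rather than details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 64, 256  # embedding dim and SAE latent dim (hypothetical sizes)

# Placeholder SAE weights; the paper trains these on pooled embeddings.
W_enc = rng.normal(0, 0.1, (k, d)); b_enc = np.zeros(k)
W_dec = rng.normal(0, 0.1, (d, k)); b_dec = np.zeros(d)

def sae_encode(x):
    """Map an embedding to a sparse latent code (ReLU activation)."""
    return np.maximum(W_enc @ x + b_enc, 0.0)

def sae_decode(z):
    """Reconstruct an embedding in its original dimensionality."""
    return W_dec @ z + b_dec

def language_units(emb_by_lang, top_m=8):
    """Score latent units by cross-language variance of their mean
    activation; high-variance units are treated as language-identity
    units (one plausible reading of 'cross-language activation
    statistics')."""
    means = np.stack([
        np.mean([sae_encode(x) for x in X], axis=0)
        for X in emb_by_lang.values()
    ])                      # (num_languages, k)
    score = means.var(axis=0)
    return np.argsort(score)[-top_m:]

def edit_embedding(x, units, alpha=0.0):
    """Suppress language-identity units (alpha=0 removes them fully)
    and reconstruct, so the edited vector drops into the same index."""
    z = sae_encode(x)
    z[units] *= alpha
    return sae_decode(z)

# Toy usage: two "languages" as shifted Gaussian clouds of embeddings.
embs = {"en": rng.normal(0.5, 1.0, (16, d)),
        "ko": rng.normal(-0.5, 1.0, (16, d))}
units = language_units(embs)
edited = edit_embedding(embs["en"][0], units)
```

Because the edited vector keeps the original dimensionality `d`, it can replace the stored vector in an existing index without re-encoding any raw text, which is the compatibility property the abstract emphasizes.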
Problem

Research questions and friction points this paper is trying to address.

multilingual information retrieval
language identity
dense retrieval
cross-language retrieval
embedding bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

language identity removal
sparse autoencoder
multilingual retrieval
post-hoc editing
cross-language retrieval
Dongjun Kim
Stanford University
Jeongho Yoon
Department of Computer Science and Engineering, Korea University
Chanjun Park
Assistant Professor at Soongsil University
Heuiseok Lim
Department of Computer Science and Engineering, Korea University