🤖 AI Summary
Text embeddings are often confounded by spurious attributes, such as language identity or corpus origin, that impair cross-corpus similarity computation and clustering. To address this, we propose Linear Concept Erasure (LCE), a least-squares-based post-processing method that learns an orthogonal projection operator to remove information associated with known confounders, thereby disentangling content-relevant dimensions from non-content ones. LCE requires no fine-tuning of the pretrained embedding model, preserving its original generalization capability. Evaluated on multilingual and multi-corpus benchmarks, LCE consistently improves similarity-based retrieval and clustering performance, yielding an average +3.2% F1 gain. Crucially, it remains robust under out-of-distribution conditions, demonstrating both effectiveness and practical utility for confounder-robust representation learning.
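To make the idea concrete, here is a minimal sketch of least-squares concept erasure as described above: fit a linear map from embeddings to one-hot confounder labels, then project the embeddings onto the orthogonal complement of the fitted directions. This is an illustrative implementation under simplifying assumptions (the function name `erase_concept` and the plain least-squares/QR construction are ours), not the paper's exact algorithm.

```python
import numpy as np

def erase_concept(X: np.ndarray, Z: np.ndarray) -> np.ndarray:
    """Remove linearly decodable confounder information from embeddings.

    X: (n, d) embedding matrix.
    Z: (n, k) one-hot confounder labels (e.g. language or corpus ID).
    Returns X projected onto the orthogonal complement of the
    least-squares directions that predict Z from X.
    """
    Xc = X - X.mean(axis=0)          # center features
    Zc = Z - Z.mean(axis=0)          # center labels
    # Least-squares weights W (d, k) such that Xc @ W ≈ Zc.
    W, *_ = np.linalg.lstsq(Xc, Zc, rcond=None)
    # Orthonormal basis Q for the confounder subspace spanned by W.
    Q, _ = np.linalg.qr(W)
    # Orthogonal projection removing that subspace: X (I - Q Q^T).
    return X - (X @ Q) @ Q.T
```

After this projection, no linear probe can recover the confounder from the erased embeddings along the removed directions, while the remaining dimensions are left untouched.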
📝 Abstract
Embedding-based similarity metrics between text sequences are influenced not only by the content dimensions we care most about, but also by spurious attributes such as the text's source or language. These document confounders cause problems for many applications, especially those that need to pool texts from different corpora. This paper shows that a debiasing algorithm that removes information about observed confounders from the encoder representations substantially reduces these biases at minimal computational cost. Document similarity and clustering metrics improve, often dramatically, across every embedding variant and task we evaluate. Interestingly, performance on out-of-distribution benchmarks is unaffected, indicating that the embeddings are not otherwise degraded.