LDIR: Low-Dimensional Dense and Interpretable Text Embeddings with Relative Representations

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Text embedding faces a fundamental trade-off between interpretability and compactness: dense methods (e.g., SimCSE) achieve strong performance but lack interpretability; sparse approaches (e.g., bag-of-words) are interpretable yet underperform; while emerging LLM-based interpretable methods offer semantic transparency, they suffer from prohibitively high dimensionality (>10,000). This paper proposes the first low-dimensional (<500D), dense, and fine-grained interpretable text embedding framework. Our core innovation is constructing an anchor text set via farthest-point sampling and defining each dimension’s semantics as the relative similarity between input text and corresponding anchor text—enabling traceable, human-understandable interpretations within a dense vector space. Empirically, our method matches SimCSE’s performance on semantic similarity, retrieval, and clustering tasks, substantially outperforms high-dimensional interpretable baselines, and achieves >95% dimensionality reduction. Code is publicly available.

📝 Abstract
Semantic text representation is a fundamental task in natural language processing. Existing text embeddings (e.g., SimCSE and LLM2Vec) have demonstrated excellent performance, but the values of each dimension are difficult to trace and interpret. Bag-of-words, a classic sparse interpretable embedding, suffers from poor performance. Recently, Benara et al. (2024) proposed interpretable text embeddings using large language models, which form "0/1" embeddings based on responses to a series of questions. These interpretable text embeddings are typically high-dimensional (larger than 10,000). In this work, we propose Low-dimensional (lower than 500) Dense and Interpretable text embeddings with Relative representations (LDIR). The numerical values of its dimensions indicate semantic relatedness to different anchor texts selected through farthest point sampling, offering both semantic representation and a certain level of traceability and interpretability. We validate LDIR on multiple semantic textual similarity, retrieval, and clustering tasks. Extensive experimental results show that LDIR performs close to the black-box baseline models and outperforms the interpretable embedding baselines with far fewer dimensions. Code is available at https://github.com/szu-tera/LDIR.
Problem

Research questions and friction points this paper is trying to address.

Existing text embeddings lack interpretability and traceability
Bag-of-words embeddings have poor performance despite interpretability
High-dimensional interpretable embeddings are computationally inefficient
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-dimensional dense interpretable text embeddings
Relative representations with anchor texts
Farthest point sampling for semantic relatedness
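The anchor-based scheme in the bullets above can be sketched in a few lines. This is a toy illustration, not code from the LDIR repository: random vectors stand in for backbone encoder embeddings, and the function names are our own. The idea is that farthest point sampling greedily picks mutually distant anchor texts, and each dimension of the final embedding is the similarity of the input text to one anchor.

```python
import numpy as np

def farthest_point_sampling(embs: np.ndarray, k: int) -> np.ndarray:
    """Greedily pick k anchor indices that are mutually far apart."""
    chosen = [0]  # start from an arbitrary point
    dists = np.linalg.norm(embs - embs[0], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dists))  # farthest point from the chosen set
        chosen.append(idx)
        dists = np.minimum(dists, np.linalg.norm(embs - embs[idx], axis=1))
    return np.array(chosen)

def relative_representation(x: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """Each output dimension is the cosine similarity of x to one anchor."""
    x = x / np.linalg.norm(x)
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return a @ x

# Toy usage: 1000 candidate texts in a 64-d backbone space, 8 anchors.
rng = np.random.default_rng(0)
pool = rng.normal(size=(1000, 64))          # stand-in for encoder outputs
anchor_idx = farthest_point_sampling(pool, k=8)
z = relative_representation(pool[3], pool[anchor_idx])
print(z.shape)  # one dimension per anchor text
```

Because each dimension is tied to a concrete anchor text, a large value in dimension i can be read as "this input is semantically close to anchor i", which is the traceability the paper claims.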
👥 Authors
Yile Wang, Shenzhen University (Natural Language Processing)
Zhanyu Shen, College of Computer Science and Software Engineering, Shenzhen University
Hui Huang, College of Computer Science and Software Engineering, Shenzhen University