Randomly Removing 50% of Dimensions in Text Embeddings has Minimal Impact on Retrieval and Classification Tasks

📅 2025-08-25
🤖 AI Summary
This work investigates the redundancy of dimensions in text embeddings—specifically, why randomly removing up to 50% of dimensions incurs only marginal performance degradation (<10%) on downstream tasks. Method: We conduct systematic random truncation experiments across six state-of-the-art text encoders on 26 classification and retrieval benchmarks, and extend analysis to token prediction in large language models. Contribution/Results: We discover that embedding spaces contain abundant, uniformly distributed redundant dimensions; certain truncations even improve performance. This redundancy is robust across retrieval, classification, and generative tasks. Our findings challenge the prevailing assumption that high-dimensional embeddings are necessarily utilized efficiently, and provide the first empirical evidence and mechanistic insight into the pervasive structural redundancy inherent in text representations. These results establish a theoretical foundation and empirical support for embedding compression, interpretability analysis, and efficient model deployment.

📝 Abstract
In this paper, we study the surprising impact that truncating text embeddings has on downstream performance. We consistently observe, across 6 state-of-the-art text encoders and 26 downstream tasks, that randomly removing up to 50% of embedding dimensions results in only a minor drop in performance (less than 10%) on retrieval and classification tasks. Given the benefits of using smaller embeddings, as well as the potential insights into text encoding, we study this phenomenon and find that, contrary to what prior work suggests, it is not the result of an ineffective use of representation space. Instead, we find that a large number of uniformly distributed dimensions actually increase performance when removed. This would explain why, on average, removing a large number of embedding dimensions results in only a marginal drop in performance. We make similar observations when truncating the embeddings used by large language models to make next-token predictions on generative tasks, suggesting that this phenomenon is not isolated to classification or retrieval tasks.
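The random-truncation setup the abstract describes can be illustrated with a small sketch: sample a random subset of embedding dimensions, keep only those coordinates for both queries and documents, and compare retrieval against the full embeddings. The toy data, function names, and the 50% keep ratio below are illustrative assumptions, not the paper's actual experimental code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for text-encoder outputs: 8 "document" embeddings of
# dimension 64, and a query constructed to be closest to document 3.
dim = 64
docs = rng.normal(size=(8, dim))
query = docs[3] + 0.1 * rng.normal(size=dim)

def truncate(vectors, keep_ratio, rng):
    """Randomly drop a fraction of embedding dimensions; return the
    truncated vectors and the indices of the kept dimensions."""
    d = vectors.shape[-1]
    kept = rng.choice(d, size=int(d * keep_ratio), replace=False)
    return vectors[..., kept], kept

def top1(query, docs):
    """Index of the document with the highest cosine similarity."""
    q = query / np.linalg.norm(query)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    return int(np.argmax(d @ q))

# Retrieval with full embeddings vs. a random 50% of the dimensions.
full_hit = top1(query, docs)
kept_docs, kept = truncate(docs, 0.5, rng)
trunc_hit = top1(query[kept], kept_docs)
```

On this toy data both settings retrieve the same document, mirroring the paper's observation that retrieval quality survives aggressive random truncation; on real benchmarks the paper measures the (small) average drop across 26 tasks rather than a single query.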
Problem

Research questions and friction points this paper is trying to address.

Studying the impact of randomly truncating text embeddings on performance
Investigating why removing embedding dimensions minimally affects tasks
Exploring if this phenomenon extends to generative tasks with LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Randomly removing 50% of embedding dimensions
Performance drop stays under 10%
Works across retrieval and classification tasks
Sotaro Takeshita
Data and Web Science Group, University of Mannheim, Germany
Yurina Takeshita
Independent researcher
Daniel Ruffinelli
Data and Web Science Group, University of Mannheim, Germany
Simone Paolo Ponzetto
Professor of Information Systems, University of Mannheim