🤖 AI Summary
This work investigates dimensional redundancy in text embeddings: specifically, why randomly removing up to 50% of embedding dimensions degrades downstream performance by less than 10%. Method: We conduct systematic random-truncation experiments across six state-of-the-art text encoders on 26 classification and retrieval benchmarks, and extend the analysis to next-token prediction in large language models. Contribution/Results: We find that embedding spaces contain many uniformly distributed redundant dimensions; removing certain dimensions even improves performance. This redundancy holds across retrieval, classification, and generative tasks. The findings challenge the prevailing assumption that high-dimensional embeddings are necessarily used efficiently, and offer empirical evidence and mechanistic insight into the pervasive structural redundancy of text representations. These results provide empirical support for embedding compression, interpretability analysis, and efficient model deployment.
📝 Abstract
In this paper, we study the surprising impact that truncating text embeddings has on downstream performance. Across 6 state-of-the-art text encoders and 26 downstream tasks, we consistently observe that randomly removing up to 50% of embedding dimensions results in only a minor drop in performance, less than 10%, on retrieval and classification tasks. Given the benefits of smaller embeddings, as well as the potential insights into text encoding, we study this phenomenon and find that, contrary to what prior work suggests, it is not the result of an ineffective use of representation space. Instead, we find that a large number of uniformly distributed dimensions actually increase performance when removed. This would explain why, on average, removing a large number of embedding dimensions results in only a marginal drop in performance. We make similar observations when truncating the embeddings that large language models use to make next-token predictions on generative tasks, suggesting that this phenomenon is not isolated to classification or retrieval tasks.
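The random-truncation setup the abstract describes can be illustrated with a minimal sketch: drop the same random subset of dimensions from query and document embeddings, then compare retrieval accuracy before and after. This is an assumption-laden toy (synthetic Gaussian "embeddings", cosine-similarity top-1 retrieval, a hypothetical `random_truncate` helper), not the paper's actual protocol.

```python
import numpy as np

def random_truncate(embeddings, keep_frac=0.5, seed=0):
    """Keep a random keep_frac of embedding dimensions.
    Hypothetical helper illustrating the truncation described above;
    the same seed must be used for queries and documents so both
    sides keep identical dimensions."""
    rng = np.random.default_rng(seed)
    d = embeddings.shape[1]
    keep = rng.choice(d, size=int(d * keep_frac), replace=False)
    return embeddings[:, np.sort(keep)]

def top1(queries, docs):
    """Index of the nearest document by cosine similarity."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    return (q @ d.T).argmax(axis=1)

# Synthetic stand-in for real text embeddings: each query is a noisy
# copy of one document, so the correct retrieval target is known.
rng = np.random.default_rng(42)
docs = rng.normal(size=(100, 768))
queries = docs[:10] + 0.1 * rng.normal(size=(10, 768))

full_acc = (top1(queries, docs) == np.arange(10)).mean()
trunc_acc = (top1(random_truncate(queries, 0.5),
                  random_truncate(docs, 0.5)) == np.arange(10)).mean()
print(full_acc, trunc_acc)
```

On this toy data the truncated accuracy stays close to the full-dimensional accuracy, mirroring the robustness the paper reports; with real encoders one would sweep `keep_frac` and average over many random dimension subsets.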