🤖 AI Summary
This study investigates how embedding dimensionality affects dense retrieval performance and how its limitations emerge as task complexity increases. Through systematic experiments across two model families at a range of scales, the work presents empirical evidence that retrieval performance follows a power-law relationship with embedding dimensionality. Building on this observation, the authors derive predictable scaling laws based on dimensionality alone, as well as a joint law over dimensionality and model size. Using dense retrieval architectures, approximate nearest neighbor search, and large-scale comparative evaluations, they show that on tasks aligned with the training objective, performance improves with higher dimensionality, albeit with diminishing returns, whereas on misaligned tasks, larger dimensions can degrade performance. These findings offer both theoretical grounding and practical guidance for selecting embedding dimensions in efficient retrieval systems.
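The core setup the summary describes, scoring documents against a query by inner product over single dense vectors, can be sketched in a few lines. This is a generic illustration with made-up embeddings, not the paper's actual models or data:

```python
import numpy as np

# Hypothetical pre-computed embeddings: each query and document is a single
# d-dimensional vector, and relevance is measured by the inner product.
rng = np.random.default_rng(0)
d = 128                                      # embedding dimension
doc_embeddings = rng.standard_normal((1000, d)).astype(np.float32)
query = rng.standard_normal(d).astype(np.float32)

scores = doc_embeddings @ query              # one inner product per document
top_k = np.argsort(-scores)[:10]             # exact top-10 by inner product

# In a deployed system an approximate nearest neighbor index (e.g. HNSW)
# replaces this exhaustive scan while keeping the same
# vector-plus-inner-product interface, which is why the embedding dimension
# bounds what such a retriever can represent.
```

The exhaustive scan above is exact; ANN indexes trade a small amount of recall for sublinear search time over the same vectors.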
📝 Abstract
Dense retrieval, which encodes each query and document into a single dense vector, has become the dominant neural retrieval approach due to its simplicity and compatibility with fast approximate nearest neighbor algorithms. As the tasks dense retrieval is applied to grow in complexity, the fundamental limitations of the underlying data structure and similarity metric (namely vectors and inner products) become more apparent. Recent work has established theoretical limitations inherent to single vectors and inner products that are closely tied to the embedding dimension. Given the importance of embedding dimension for retrieval capacity, understanding how dense retrieval performance changes as the embedding dimension is scaled is fundamental to building next-generation retrieval models that balance effectiveness and efficiency. In this work, we conduct a comprehensive analysis of the relationship between embedding dimension and retrieval performance. Our experiments cover two model families and a range of model sizes from each, constructing a detailed picture of embedding scaling behavior. We find that the scaling behavior fits a power law, allowing us to derive scaling laws for performance given only the embedding dimension, as well as a joint law accounting for both embedding dimension and model size. Our analysis shows that for evaluation tasks aligned with the training task, performance continues to improve as embedding size increases, though with diminishing returns. For evaluation data that is less aligned with the training task, performance is less predictable, degrading at larger embedding dimensions for certain tasks. We hope our work provides additional insight into the limitations of embeddings and their scaling behavior, and offers a practical guide for selecting model and embedding dimension to achieve optimal performance with reduced storage and compute costs.
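The power-law fitting the abstract describes can be illustrated with a minimal sketch. The functional form, the synthetic numbers, and the error metric below are illustrative assumptions, not the paper's fitted parameters; the point is only that a power law is linear in log-log space, so the exponent can be recovered by ordinary least squares:

```python
import numpy as np

# Hypothetical: suppose retrieval error (e.g. 1 - nDCG) decays as a power
# law in embedding dimension d: error(d) = a * d**(-b). The values of
# a_true and b_true are made up for this sketch.
dims = np.array([64, 128, 256, 512, 1024, 2048], dtype=float)
a_true, b_true = 2.0, 0.35
error = a_true * dims ** (-b_true)     # noiseless synthetic "measurements"

# log(error) = log(a) - b * log(d), so a degree-1 fit on the logs
# recovers the exponent and prefactor.
slope, intercept = np.polyfit(np.log(dims), np.log(error), deg=1)
b_fit, a_fit = -slope, np.exp(intercept)

def predicted_error(d: float) -> float:
    """Extrapolate the fitted law to an unseen embedding dimension."""
    return a_fit * d ** (-b_fit)

print(f"fitted: a = {a_fit:.3f}, b = {b_fit:.3f}")
print(f"predicted error at d=4096: {predicted_error(4096):.4f}")
```

With noiseless synthetic data the fit recovers the generating parameters; on real benchmark measurements the same procedure yields the best-fit law, and its quality of fit is what determines how far the extrapolation can be trusted.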