🤖 AI Summary
Existing code retrieval methods, built on dense embedding models, struggle to bridge the semantic gap between natural language queries and code, and face further challenges in multilingual support, long code sequences, and subword fragmentation. This work proposes SPLADE-Code, the first large-scale family of learned sparse retrieval (LSR) models tailored for code retrieval, spanning 0.6 to 8 billion parameters. Through lightweight single-stage training, SPLADE-Code combines subword processing with semantic expansion to generate expanded tokens that jointly optimize lexical and semantic matching. On the MTEB Code benchmark, its sub-billion-parameter variant achieves a state-of-the-art score of 75.4 and the 8-billion-parameter model reaches 79.0, while both enable sub-millisecond retrieval latency over million-scale codebases.
📝 Abstract
Retrieval over large codebases is a key component of modern LLM-based software engineering systems. Existing approaches predominantly rely on dense embedding models, while learned sparse retrieval (LSR) remains largely unexplored for code. Applying sparse retrieval to code is challenging due to subword fragmentation, semantic gaps between natural-language queries and code, the diversity of programming languages and sub-tasks, and the length of code documents, which can harm sparsity and latency. We introduce SPLADE-Code, the first large-scale family of learned sparse retrieval models specialized for code retrieval (600M-8B parameters). Despite a lightweight one-stage training pipeline, SPLADE-Code achieves state-of-the-art performance among retrievers under 1B parameters (75.4 on MTEB Code) and competitive results at larger scales (79.0 with 8B). We show that learned expansion tokens are critical for bridging lexical and semantic matching, and we provide a latency analysis showing that LSR enables sub-millisecond retrieval on a 1M-passage collection with little loss of effectiveness.
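To make the core idea concrete, here is a minimal sketch of how an LSR model like SPLADE-Code scores a query against a document: both are encoded as sparse vocabulary-weight vectors, and learned *expansion* tokens (terms that never appear literally in the text) let a natural-language query match code lexically. The tokens and weights below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of learned sparse retrieval (LSR) scoring. Each
# query/document is a sparse map from vocabulary tokens to non-negative
# weights; relevance is the dot product over shared tokens, which an
# inverted index can compute in sub-millisecond time at scale.

def sparse_score(query: dict, doc: dict) -> float:
    """Dot product over the (small) set of shared vocabulary tokens."""
    # Iterate over the smaller map; with an inverted index this becomes
    # a postings-list traversal over only the query's active tokens.
    small, large = (query, doc) if len(query) < len(doc) else (doc, query)
    return sum(w * large.get(tok, 0.0) for tok, w in small.items())

# Query "sort a list", with hypothetical learned expansions ("order", "array").
query = {"sort": 1.2, "list": 0.8, "order": 0.5, "array": 0.4}
# A code document using `sorted` over an array; the expansion token
# "sort" (absent from the literal code text) bridges the lexical gap.
doc = {"sorted": 0.9, "array": 0.7, "sort": 1.0, "def": 0.2}

print(round(sparse_score(query, doc), 2))  # 1.2*1.0 + 0.4*0.7 = 1.48
```

Because only the overlapping tokens contribute, scoring cost scales with the number of active query terms rather than with embedding dimensionality, which is what makes the sub-millisecond latency over million-scale collections plausible.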