🤖 AI Summary
Existing code retrieval benchmarks overemphasize functional relevance while neglecting critical quality dimensions—correctness, efficiency, security, and maintainability. This work introduces CoQuIR, the first large-scale, multilingual, quality-aware code retrieval benchmark, covering 11 programming languages, 42K natural-language queries, and 135K code snippets, with fine-grained quality annotations across multiple dimensions. We formally define the role of code quality in retrieval and propose two quality-oriented evaluation metrics: Pairwise Preference Accuracy and Margin-based Ranking Score. We benchmark 23 models on CoQuIR and explore quality-aware fine-tuning built on a multilingual quality annotation framework and synthetic perturbation-based data augmentation. Results show that quality-aware training significantly improves the quality-oriented metrics without degrading semantic relevance. Furthermore, downstream code generation experiments demonstrate improved reliability of generated outputs, validating CoQuIR’s utility for building robust, production-ready code intelligence systems.
📝 Abstract
Code retrieval is essential in modern software development, as it boosts code reuse and accelerates debugging. However, current benchmarks primarily emphasize functional relevance while neglecting critical dimensions of software quality. Motivated by this gap, we introduce CoQuIR, the first large-scale, multilingual benchmark specifically designed to evaluate quality-aware code retrieval across four key dimensions: correctness, efficiency, security, and maintainability. CoQuIR provides fine-grained quality annotations for 42,725 queries and 134,907 code snippets in 11 programming languages, and is accompanied by two quality-centric evaluation metrics: Pairwise Preference Accuracy and Margin-based Ranking Score. Using CoQuIR, we benchmark 23 retrieval models, covering both open-source and proprietary systems, and find that even top-performing models frequently fail to distinguish buggy or insecure snippets from their more robust counterparts. Furthermore, we conduct preliminary investigations into training methods that explicitly encourage retrievers to recognize code quality. Using synthetic datasets, we demonstrate promising improvements in quality-centric metrics across various models, without sacrificing semantic relevance. Downstream code generation experiments further validate the effectiveness of our approach. Overall, our work highlights the importance of integrating quality signals into code retrieval systems, laying the groundwork for more trustworthy and robust software development tools.
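The abstract names the two quality-centric metrics without defining them. A plausible reading, sketched below under assumptions (the paper's exact definitions may differ), is that each metric compares a retriever's scores on paired high-quality and low-quality snippets for the same query: Pairwise Preference Accuracy counts how often the high-quality snippet is ranked higher, and the margin-based score averages the score gap between the two.

```python
def pairwise_preference_accuracy(high_scores: list[float],
                                 low_scores: list[float]) -> float:
    """Fraction of (high-quality, low-quality) snippet pairs where the
    retriever scores the high-quality snippet strictly higher.

    Hypothetical formulation; the paper's definition may differ in detail.
    """
    pairs = list(zip(high_scores, low_scores))
    return sum(h > l for h, l in pairs) / len(pairs)


def margin_based_ranking_score(high_scores: list[float],
                               low_scores: list[float]) -> float:
    """Mean score margin (high minus low) across pairs; a larger positive
    value indicates a stronger preference for higher-quality code."""
    margins = [h - l for h, l in zip(high_scores, low_scores)]
    return sum(margins) / len(margins)


# Example: three query-level pairs of retrieval scores.
high = [0.9, 0.7, 0.6]
low = [0.5, 0.8, 0.4]
print(pairwise_preference_accuracy(high, low))  # 2 of 3 pairs correct
print(margin_based_ranking_score(high, low))    # mean margin
```

Under this reading, the two metrics are complementary: the accuracy captures ordering only, while the margin also rewards how confidently the retriever separates good code from its degraded counterpart.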