🤖 AI Summary
In competitive programming, problem statements exhibit high surface-level diversity despite sharing structurally similar solution logic; existing code generation and retrieval models rely excessively on superficial semantics and thus fail to capture deep solution equivalence. To address this, we propose SolveRank, the first solution-aware retrieval ranking model. It leverages a large language model (DeepSeek-R1) to generate logically equivalent yet lexically diverse problem variants as high-quality positive samples, whose solution consistency is validated by GPT-4o. Negative samples are constructed from BM25-retrieved hard negatives and random sampling, and a contrastive learning objective optimizes ranking performance. Evaluated on xCodeEval, SolveRank significantly outperforms state-of-the-art retrieval methods, achieving superior precision, recall, and code generation success rates, especially on challenging problems. Our work establishes a new paradigm for solution-oriented code retrieval and generation.
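The summary mentions BM25-based hard negatives. As a rough illustration only (not the paper's actual mining pipeline), a compact Okapi BM25 ranker can surface lexically similar but logically different problems to serve as hard negatives; the function name and toy corpus below are assumptions for the sketch:

```python
import math
from collections import Counter

def bm25_rank(query, corpus, k1=1.5, b=0.75):
    """Rank documents in `corpus` against `query` with Okapi BM25.
    In a hard-negative mining setup, the top-ranked problems that are
    NOT solution-equivalent to the query would be kept as negatives."""
    docs = [doc.lower().split() for doc in corpus]
    n_docs = len(docs)
    avgdl = sum(len(d) for d in docs) / n_docs
    # document frequency: how many documents contain each term
    df = Counter(t for d in docs for t in set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query.lower().split():
            if t not in tf:
                continue
            idf = math.log((n_docs - df[t] + 0.5) / (df[t] + 0.5) + 1)
            score += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            )
        scores.append(score)
    # document indices sorted by descending BM25 score
    return sorted(range(n_docs), key=lambda i: -scores[i])
```

Lexically overlapping statements score high even when their solutions differ, which is exactly what makes BM25 hits useful as *hard* negatives for a solution-aware ranker.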
📝 Abstract
In competitive programming tasks, problem statements are often embedded within elaborate narrative backgrounds, so solving them requires a deep understanding of the underlying solutions. Current code generation models focus primarily on token-level semantic modeling and are therefore highly susceptible to distraction by irrelevant narrative statements. Inspired by retrieval-augmented generation (RAG), retrieving reference code with similar solutions may enhance model performance on difficult problems. However, existing retrieval models also emphasize surface-level semantic similarity, neglecting the deeper solution-level logical similarities that are critical in competitive programming. Designing ranking models that can accurately identify and retrieve such problems and their corresponding code therefore remains an urgent research problem in competitive code generation. In this paper, we propose SolveRank, a solution-aware ranking model empowered by synthetic data for competitive programming tasks. Specifically, we leverage the DeepSeek-R1 model to generate logically equivalent but differently phrased new problems, whose solution consistency is verified by GPT-4o. We then train SolveRank with these variants as positive samples and BM25- or randomly retrieved problems as negatives. During inference, SolveRank retrieves relevant problems and their corresponding code from the corpus to assist a downstream code generator. Experiments on the xCodeEval dataset demonstrate that SolveRank outperforms SOTA ranking methods in precision and recall, and boosts code generation performance on difficult problems.
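The abstract does not specify the contrastive objective. A minimal InfoNCE-style sketch, assuming one positive (an LLM-generated equivalent problem) and a mix of BM25/random negatives per anchor, and using cosine similarity over precomputed embeddings (all names and the temperature value are assumptions):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def info_nce_loss(anchor, positive, negatives, tau=0.07):
    """InfoNCE-style contrastive loss: pull the (anchor, positive) pair
    together and push the negatives apart in embedding space.
    Returns -log( exp(s+/tau) / sum_i exp(s_i/tau) )."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / tau for s in sims]
    # log-sum-exp with max-shift for numerical stability
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_z)
```

The loss is small when the anchor embedding is closest to its logically equivalent variant and large when a lexical hard negative ranks above it, which is the behavior a solution-aware ranker is trained toward.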