ICPC-Eval: Probing the Frontiers of LLM Reasoning with Competitive Programming Contests

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks (e.g., LiveCodeBench) and metrics (e.g., Pass@K) inadequately capture LLMs’ complex reasoning and iterative repair capabilities in realistic ICPC competition settings. Method: We introduce the first ICPC-oriented high-difficulty benchmark, comprising 118 authentic contest problems, and propose Refine@K—a novel execution-feedback-driven metric for evaluating multi-round code refinement. Our framework incorporates ICPC-distribution-aligned problem selection and difficulty calibration, automated test case generation, a local sandbox execution environment, and a feedback-driven multi-round evaluation paradigm. Contribution/Results: Experiments reveal that state-of-the-art reasoning models (e.g., DeepSeek-R1) still substantially underperform top human teams; Pass@K severely underestimates their true capability; and multi-round execution feedback is critical to unlocking latent reasoning potential. The benchmark and evaluation infrastructure are fully open-sourced to advance scientifically rigorous assessment of deep reasoning in LLMs.

📝 Abstract
With the significant progress of large reasoning models in complex coding and reasoning tasks, existing benchmarks, like LiveCodeBench and CodeElo, are insufficient to evaluate the coding capabilities of large language models (LLMs) in real competition environments. Moreover, current evaluation metrics such as Pass@K fail to capture the reflective abilities of reasoning models. To address these challenges, we propose ICPC-Eval, a top-level competitive coding benchmark designed to probe the frontiers of LLM reasoning. ICPC-Eval includes 118 carefully curated problems from 11 recent ICPC contests held in various regions of the world, offering three key contributions: 1) A challenging, realistic ICPC competition scenario, featuring a problem type and difficulty distribution consistent with actual contests. 2) A robust test case generation method and a corresponding local evaluation toolkit, enabling efficient and accurate local evaluation. 3) An effective test-time scaling evaluation metric, Refine@K, which allows iterative repair of solutions based on execution feedback. The results underscore the significant challenge of evaluating complex reasoning abilities: top-tier reasoning models like DeepSeek-R1 often rely on multi-turn code feedback to fully unlock their in-context reasoning potential, in contrast to non-reasoning counterparts. Furthermore, despite recent advancements in code generation, these models still lag behind top-performing human teams. We release the benchmark at: https://github.com/RUCAIBox/Slow_Thinking_with_LLMs
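The abstract describes Refine@K as a metric where a model may iteratively repair its solution using execution feedback. A minimal sketch of such a loop is below; the function names, signatures, and scoring convention (fraction of problems solved within K attempts) are assumptions for illustration, not the paper's actual implementation.

```python
# Hedged sketch of a Refine@K-style evaluation loop: the model gets up to K
# attempts per problem, and execution feedback from each failed run is fed
# into the next attempt. All names here are illustrative.
from typing import Callable, List, Tuple

Feedback = str  # e.g. a compile error, wrong-answer diff, or TLE message

def refine_at_k(
    problems: List[str],
    generate: Callable[[str, List[Feedback]], str],      # LLM: problem + feedback history -> code
    judge: Callable[[str, str], Tuple[bool, Feedback]],  # sandbox: (problem, code) -> (passed, feedback)
    k: int,
) -> float:
    """Fraction of problems solved within k feedback-driven attempts."""
    solved = 0
    for problem in problems:
        history: List[Feedback] = []
        for _ in range(k):
            code = generate(problem, history)
            passed, feedback = judge(problem, code)
            if passed:
                solved += 1
                break
            history.append(feedback)  # next attempt sees the execution feedback
    return solved / len(problems) if problems else 0.0

# Toy demo: a "model" that only produces a correct solution after
# it has seen at least one round of execution feedback.
def toy_generate(problem: str, history: List[Feedback]) -> str:
    return "correct" if history else "buggy"

def toy_judge(problem: str, code: str) -> Tuple[bool, Feedback]:
    return (code == "correct", "wrong answer on test 3")

print(refine_at_k(["A", "B"], toy_generate, toy_judge, k=2))  # -> 1.0
print(refine_at_k(["A"], toy_generate, toy_judge, k=1))       # -> 0.0
```

The toy demo mirrors the abstract's observation: with only one attempt (Pass@1-like), the feedback-dependent model scores 0, while allowing K=2 feedback rounds unlocks its full score.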
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM coding skills in real competition settings
Assessing reflective reasoning abilities beyond Pass@K metrics
Bridging performance gap between models and human teams
Innovation

Methods, ideas, or system contributions that make the work stand out.

Realistic ICPC competition scenario design
Robust test case generation toolkit
Refine@K iterative repair metric