🤖 AI Summary
This work systematically evaluates large language models’ (LLMs) algorithmic problem-solving capabilities on real ICPC World Finals competition problems. Method: We propose LLM-ProS, a novel evaluation paradigm, and introduce the first benchmark dataset comprising 166 authentic ICPC problems. It incorporates response calibration, contamination analysis, and chain-of-thought (CoT) attribution to isolate reasoning fidelity from data leakage. Evaluation employs multi-dimensional metrics—including accuracy, token/latency cost, and CoT quality—to assess algorithmic reasoning, solution correctness, and computational efficiency. Contribution/Results: Experiments reveal that o1-preview and GPT-4o achieve top performance on complex problems, yet all models exhibit severely limited generalization to unseen problem types. Data contamination and insufficient reasoning depth emerge as primary bottlenecks. Our framework establishes a reproducible, attributable methodology for rigorous assessment of LLMs’ algorithmic reasoning capacity.
📝 Abstract
The rapid advancement of large language models has opened new avenues for automating complex problem-solving tasks such as algorithmic coding and competitive programming. This paper introduces a novel evaluation technique, LLM-ProS, to assess the performance of state-of-the-art LLMs on International Collegiate Programming Contest (ICPC) problems. Using a curated dataset of 166 World Finals problems from 2011 to 2024, we benchmark the models' reasoning, accuracy, and efficiency. We evaluate five models: GPT-4o, Mistral Large, Llama-3.1-405B, and the o1 family, consisting of o1-mini and o1-preview, across critical metrics such as correctness, resource utilization, and response calibration. Our results reveal significant differences in the models' abilities to generalize, adapt, and solve novel problems. We also investigate the impact of training methodologies, dataset contamination, and chain-of-thought reasoning on model performance. The findings provide new insights into optimizing LLMs for algorithmic tasks, highlighting both the strengths and limitations of current models.