🤖 AI Summary
Existing CS benchmarks predominantly focus on tasks with known optimal solutions, limiting their ability to assess model reasoning on open-ended, frontier problems. This work introduces FrontierCS, the first benchmark targeting frontier computer-science tasks where optimal solutions are unknown but solution quality is objectively quantifiable. It comprises 156 open-ended problems, spanning algorithmic tasks (often NP-hard variants of competitive programming problems) and research-level open problems. Its methodological contribution is a tripartite evaluation framework combining expert-designed problem specifications, objective partial-credit scoring, and automated execution-based assessment: models must produce executable programs whose outputs are scored on solution quality rather than matched against a single correct answer. Experiments reveal that state-of-the-art reasoning models significantly underperform human experts, that scaling the inference budget alone fails to close this gap, and that models exhibit a systematic "runnable-first" bias, producing merely workable code rather than high-quality algorithms and system designs. These findings expose critical capability boundaries of current AI systems on authentic, cutting-edge CS tasks.
📝 Abstract
We introduce FrontierCS, a benchmark of 156 open-ended problems across diverse areas of computer science, designed and reviewed by experts including CS PhDs and top-tier competitive programmers and problem setters. Unlike existing benchmarks that focus on tasks with known optimal solutions, FrontierCS targets problems where the optimal solution is unknown but the quality of a solution can be objectively evaluated. Models solve these tasks by implementing executable programs rather than outputting a direct answer. FrontierCS includes algorithmic problems, often NP-hard variants of competitive programming problems with objective partial scoring, and research problems that admit the same kind of graded, objective evaluation. For each problem we provide an expert reference solution and an automatic evaluator. Combining open-ended design, measurable progress, and expert curation, FrontierCS provides a benchmark at the frontier of computer-science difficulty. Empirically, we find that frontier reasoning models still lag far behind human experts on both the algorithmic and research tracks, that increasing reasoning budgets alone does not close this gap, and that models often over-optimize for producing merely workable code instead of discovering high-quality algorithms and system designs.
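To make the execution-based, partial-credit evaluation concrete, here is a minimal Python sketch of what such an automatic evaluator could look like. The harness shape, the function names (`run_candidate`, `objective_value`, `partial_credit`), and the reference-relative score normalization are illustrative assumptions, not the paper's actual API; FrontierCS's real evaluators are problem-specific.

```python
import subprocess

def run_candidate(binary_path: str, instance_path: str, timeout_s: int = 60) -> str:
    """Execute a compiled candidate solution on one problem instance via stdin."""
    with open(instance_path) as f:
        result = subprocess.run(
            [binary_path], stdin=f, capture_output=True,
            text=True, timeout=timeout_s,
        )
    result.check_returncode()  # a crash or non-zero exit scores 0 for this instance
    return result.stdout

def objective_value(instance_path: str, output: str) -> float:
    """Problem-specific objective (e.g. tour length, makespan); stub for illustration."""
    raise NotImplementedError

def partial_credit(candidate_bin: str, reference_bin: str, instances: list[str]) -> float:
    """Average per-instance score of the candidate relative to the expert reference.

    Assumes a minimization objective (lower is better); infeasible, crashing,
    or timed-out runs earn 0 on that instance, so any valid output gets partial credit.
    """
    scores = []
    for inst in instances:
        try:
            cand = objective_value(inst, run_candidate(candidate_bin, inst))
            ref = objective_value(inst, run_candidate(reference_bin, inst))
            # Reference-relative ratio, capped at 1.0 if the candidate beats the expert.
            scores.append(min(1.0, ref / cand) if cand > 0 else 0.0)
        except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
            scores.append(0.0)
    return sum(scores) / len(scores)
```

A harness of this shape explains the "runnable-first" failure mode the paper reports: any program that runs and emits a feasible answer earns nonzero credit, while closing the gap to the expert reference requires genuinely better algorithms.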