FrontierScience: Evaluating AI's Ability to Perform Expert-Level Scientific Tasks

📅 2026-01-29
📈 Citations: 2
Influential: 1
🤖 AI Summary
Existing scientific evaluation benchmarks rely predominantly on multiple-choice questions or already-established knowledge, making them inadequate for assessing the expert-level reasoning of AI systems on cutting-edge scientific tasks. To address this gap, this work introduces FrontierScience, a benchmark comprising two tracks: one featuring International Olympiad-level problems and the other open-ended, doctoral-level research sub-tasks spanning frontier topics in physics, chemistry, and biology, from quantum electrodynamics to synthetic organic chemistry. The Olympiad problems are originally authored by international Olympiad medalists and national team coaches, and the research problems are written and verified by PhD scientists; a process-oriented, fine-grained rubric framework moves evaluation beyond conventional answer-only scoring. Comprising several hundred high-quality questions, including 160 open-source “gold” items, the benchmark effectively discriminates among state-of-the-art models in advanced scientific reasoning.

📝 Abstract
We introduce FrontierScience, a benchmark evaluating expert-level scientific reasoning in frontier language models. Recent model progress has nearly saturated existing science benchmarks, which often rely on multiple-choice knowledge questions or already published information. FrontierScience addresses this gap through two complementary tracks: (1) Olympiad, consisting of international olympiad problems at the level of IPhO, IChO, and IBO, and (2) Research, consisting of PhD-level, open-ended problems representative of sub-tasks in scientific research. FrontierScience contains several hundred questions (including 160 in the open-sourced gold set) covering subfields across physics, chemistry, and biology, from quantum electrodynamics to synthetic organic chemistry. All Olympiad problems are originally produced by international Olympiad medalists and national team coaches to ensure standards of difficulty, originality, and factuality. All Research problems are research sub-tasks written and verified by PhD scientists (doctoral candidates, postdoctoral researchers, or professors). For Research, we introduce a granular rubric-based evaluation framework to assess model capabilities throughout the process of solving a research task, rather than judging only a standalone final answer.
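The paper's actual rubrics are not reproduced here, but the rubric-based evaluation it describes can be illustrated generically: a response is graded against a set of weighted process criteria rather than by a single pass/fail judgment on the final answer. A minimal sketch, with hypothetical criteria and a made-up weighting scheme (none of these names come from the paper):

```python
from dataclasses import dataclass

@dataclass
class RubricItem:
    criterion: str   # e.g. "applies correct boundary conditions"
    weight: float    # relative credit for satisfying this criterion
    satisfied: bool  # judgment, e.g. by a human expert or grader model

def rubric_score(items: list[RubricItem]) -> float:
    """Weighted fraction of rubric criteria satisfied, in [0, 1]."""
    total = sum(i.weight for i in items)
    earned = sum(i.weight for i in items if i.satisfied)
    return earned / total if total else 0.0

# Hypothetical grading of one research sub-task response:
grading = [
    RubricItem("states the governing equation", 1.0, True),
    RubricItem("applies correct boundary conditions", 2.0, True),
    RubricItem("arrives at the correct final expression", 2.0, False),
]
print(rubric_score(grading))  # 0.6
```

The point of such a scheme is partial credit for intermediate reasoning: a model that sets up the problem correctly but fails the last step scores above zero, which a standalone final-answer check would not capture.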
Problem

Research questions and friction points this paper is trying to address.

scientific reasoning
expert-level tasks
AI evaluation
science benchmarks
frontier models
Innovation

Methods, ideas, or system contributions that make the work stand out.

scientific reasoning
benchmark
rubric-based evaluation
expert-level tasks
language models
Miles Wang — Researcher, OpenAI
Robi Lin — OpenAI
Kat Hu — OpenAI
Joy Jiao — OpenAI
Neil Chowdhury — Transluce
Ethan Chang — OpenAI
Tejal Patwardhan — OpenAI