🤖 AI Summary
The scientific machine learning (SciML) community has long lacked standardized, objective benchmarks, resulting in weak baselines, inconsistent evaluation protocols, and poor reproducibility. To address this, we propose a general-purpose Common Task Framework (CTF) for SciML covering three core capabilities: forecasting, state reconstruction, and generalization. The CTF integrates canonical chaotic systems, including the Kuramoto–Sivashinsky and Lorenz equations, and defines evaluation tasks under realistic constraints such as noise, limited data, and time-series forecasting horizons. It further incorporates hidden test sets and real-world challenge competitions to ensure rigorous, unbiased assessment. By enabling fair head-to-head comparison across algorithms, the CTF has already revealed method-specific strengths and limitations on diverse scientific tasks. Complementing the framework, a global sea surface temperature competition dataset will be publicly released, fostering community-wide adoption of a unified evaluation paradigm.
📝 Abstract
Machine learning (ML) is transforming modeling and control in the physical, engineering, and biological sciences. However, rapid development has outpaced the creation of standardized, objective benchmarks, leading to weak baselines, reporting bias, and inconsistent evaluations across methods. This undermines reproducibility, misguides resource allocation, and obscures scientific progress. To address this, we propose a Common Task Framework (CTF) for scientific machine learning. The CTF features a curated set of datasets and task-specific metrics spanning forecasting, state reconstruction, and generalization under realistic constraints, including noise and limited data. Inspired by the success of CTFs in fields like natural language processing and computer vision, our framework provides a structured, rigorous foundation for head-to-head evaluation of diverse algorithms. As a first step, we benchmark methods on two canonical nonlinear systems: Kuramoto–Sivashinsky and Lorenz. These results illustrate the utility of the CTF in revealing each method's strengths, limitations, and suitability for specific classes of problems and objectives. Next, we are launching a competition built around a global, real-world sea surface temperature dataset with a true holdout set to foster community engagement. Our long-term vision is to replace ad hoc comparisons with standardized evaluations on hidden test sets that raise the bar for rigor and reproducibility in scientific ML.
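To make the kind of evaluation the CTF standardizes concrete, here is a minimal sketch of a forecasting task on the Lorenz system scored against a hidden test segment. Everything in it, the standard Lorenz-63 parameters, the persistence baseline, and the normalized RMSE metric, is an illustrative assumption rather than the paper's actual benchmark specification.

```python
# Illustrative sketch of a CTF-style forecasting task on the Lorenz-63 system.
# All parameter choices and metrics here are assumptions for illustration,
# not the paper's actual benchmark protocol.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, xyz, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Classic Lorenz-63 right-hand side."""
    x, y, z = xyz
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Generate one long trajectory and discard the initial transient.
dt = 0.01
t_eval = np.arange(0.0, 60.0, dt)
sol = solve_ivp(lorenz, (0.0, 60.0), [1.0, 1.0, 1.0],
                t_eval=t_eval, rtol=1e-9, atol=1e-9)
traj = sol.y.T[2000:]            # shape (n_steps, 3), transient removed

# CTF-style split: a public training segment and a hidden test segment
# that participants never see; only their forecasts are scored against it.
split = int(0.8 * len(traj))
train, hidden_test = traj[:split], traj[split:]

# Baseline "model": persistence, i.e., repeat the last observed state.
horizon = 100                     # forecast 100 steps (1 time unit)
forecast = np.tile(train[-1], (horizon, 1))
truth = hidden_test[:horizon]

# Score with RMSE normalized by the training data's spread, so the metric
# stays comparable across systems with different scales.
nrmse = np.sqrt(np.mean((forecast - truth) ** 2)) / np.std(train)
print(f"Persistence baseline normalized RMSE over {horizon} steps: {nrmse:.3f}")
```

Scoring only against a segment the model never sees mirrors the hidden-test-set protocol the framework advocates, and normalizing the error by the attractor's spread keeps the metric comparable across benchmark systems.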