🤖 AI Summary
Scientific machine learning (SciML) lacks a unified benchmarking framework, leading to fragmented cross-domain evaluation and poor reproducibility.
Method: We introduce the first standardized, ontology-based benchmarking framework for SciML, spanning physics, chemistry, materials science, biology, and climate science. It establishes a unified taxonomy and a six-dimensional quality scoring system. An open submission mechanism enables dynamic expansion of the benchmark suite and recognition of emerging computational paradigms. Leveraging the MLCommons ecosystem, we integrate existing frameworks, including XAI-BENCH, PDEBench, and SciMLBench, via ontology-driven modeling and community-coordinated governance.
Contribution/Results: Our work achieves systematic curation, tiered evaluation, and cross-domain interoperability of multimodal scientific benchmarks. The accompanying open-source platform supports continuous evolution, delivering a reproducible, extensible, and comparable assessment infrastructure for the SciML research community.
📝 Abstract
Scientific machine learning research spans diverse domains and data modalities, yet existing benchmark efforts remain siloed and lack standardization. This fragmentation obscures pathways to impact for novel and transformative applications of machine learning in critical scientific use cases. This paper introduces an ontology for scientific benchmarking, developed through a unified, community-driven effort that extends the MLCommons ecosystem to cover physics, chemistry, materials science, biology, climate science, and more. Building on prior initiatives such as XAI-BENCH, the FastML Science Benchmarks, PDEBench, and the SciMLBench framework, our effort consolidates a large set of disparate benchmarks and frameworks into a single taxonomy of scientific, application, and system-level benchmarks. New benchmarks can be added through an open submission workflow coordinated by the MLCommons Science Working Group and are evaluated against a six-category rating rubric that identifies and promotes high-quality benchmarks, enabling stakeholders to select those that meet their specific needs. The architecture is extensible, supporting future scientific and AI/ML motifs, and we discuss methods for identifying emerging computing patterns in unique scientific workloads. The MLCommons Science Benchmarks Ontology provides a standardized, scalable foundation for reproducible, cross-domain benchmarking in scientific machine learning. A companion webpage, updated as the effort evolves, is available at https://mlcommons-science.github.io/benchmark/