A Study of Scientific Computational Notebook Quality

📅 2026-03-23 · 📈 Citations: 0 · Influential: 0
🤖 AI Summary
Scientific computing notebooks frequently suffer from irreproducibility, poor readability, and limited reusability, posing serious threats to research reliability. This work presents the first large-scale empirical study of 1,510 Jupyter notebooks drawn from 518 code repositories referenced by Nature publications in 2024. Through manual reproduction attempts (only 2 of 19 notebooks ran successfully), documentation review, code clone detection (clones of at least 10 lines with at least 3 instances), and mutation analysis, the study systematically uncovers pervasive issues, including chaotic state management, missing dependencies, and excessive code duplication. To address these challenges, the authors propose the first multidimensional quality assessment framework explicitly designed to evaluate reproducibility, readability, and reusability, thereby establishing an empirical foundation and methodological support for improving the quality of scientific code.

📝 Abstract
The quality of scientific code is a critical concern for the research community. Poorly written code can result in irreproducible results, incorrect findings, and slower scientific progress. In this study, we evaluate scientific code quality across three dimensions: reproducibility, readability, and reusability. We curated a corpus of 518 code repositories by analyzing Code Availability statements from all 1,239 Nature publications in 2024. To assess code quality, we employed multiple methods, including manual attempts to reproduce Jupyter notebooks, documentation reviews, and analyses of code clones and mutation patterns. Our results reveal major challenges in scientific code quality. Of the 19 notebooks we attempted to execute, only two were reproducible, primarily due to missing data files and dependency issues. Code duplication was also common, with 326 clone classes (each at least 10 lines long and appearing at least three times) found across 637 of the 1,510 notebooks in our corpus. These duplications frequently involved tasks such as visualization, data processing, and statistical analysis. Moreover, our mutation analysis showed that scientific notebooks often exhibit tangled state changes, complicating comprehension and reasoning. The prevalence of these issues (unreproducible code, widespread duplication, and tangled state management) underscores the need for improved tools and abstractions to help science build reproducible, readable, and reusable software.
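The clone analysis described in the abstract (clones of at least 10 lines appearing at least three times) can be approximated with a simple exact-match sliding-window detector. The sketch below is illustrative only, not the authors' actual tool; the function and parameter names are hypothetical, and real clone detectors are typically token-based and also catch near-miss clones.

```python
from collections import defaultdict

def find_clone_classes(notebooks, min_lines=10, min_instances=3):
    """Group identical code windows across notebooks into clone classes.

    notebooks: dict mapping notebook name -> source code string.
    Returns windows of at least `min_lines` lines that occur in at
    least `min_instances` locations, with their (name, line) positions.
    """
    windows = defaultdict(list)
    for name, source in notebooks.items():
        # Normalize whitespace so indentation differences don't hide clones.
        lines = [ln.strip() for ln in source.splitlines()]
        for i in range(len(lines) - min_lines + 1):
            window = "\n".join(lines[i:i + min_lines])
            windows[window].append((name, i))
    return {w: locs for w, locs in windows.items()
            if len(locs) >= min_instances}
```

With the default thresholds, a 10-line snippet pasted into three notebooks forms one clone class, while the same snippet in only two notebooks is below the reporting threshold.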
Problem

Research questions and friction points this paper is trying to address.

reproducibility · code quality · scientific notebooks · code duplication · state management

Innovation

Methods, ideas, or system contributions that make the work stand out.

scientific code quality · computational notebooks · reproducibility · code clones · mutation analysis
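The "mutation analysis" keyword above concerns how notebook cells mutate shared state. A crude proxy for tangled state, sketched below under the assumption that cells are available as source strings, is to flag variables assigned in more than one cell, since re-running such cells out of order can silently change results. The helper names are hypothetical and this is not the paper's actual analysis.

```python
import ast

def cell_writes(cell_source):
    """Names assigned at any level of a notebook cell's code."""
    writes = set()
    for node in ast.walk(ast.parse(cell_source)):
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
            writes.add(node.id)
    return writes

def tangled_names(cells):
    """Variables written by more than one cell: a simple proxy for
    tangled state, since out-of-order execution can change them."""
    last_writer = {}
    tangled = set()
    for idx, src in enumerate(cells):
        for name in cell_writes(src):
            if name in last_writer:
                tangled.add(name)
            last_writer[name] = idx
    return tangled
```

For example, a notebook where one cell loads `df` and a later cell overwrites `df` in place would report `df` as tangled, while cells that each introduce fresh names report nothing.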
Shun Kashiwa, UC San Diego, USA
Ayla Kurdak, UC San Diego, USA
Savitha Ravi, UC San Diego, USA
Ridhi Srikanth, UC San Diego, USA
Angel Thakur, UC San Diego, USA
Sonia Chandra, UC San Diego, USA
Jonathan Truong, UC San Diego, USA
Michael Coblenz, Assistant Professor, University of California San Diego
Programming Languages · Human-Computer Interaction · Software Engineering