Measuring the State of Open Science in Transportation Using Large Language Models

📅 2026-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of systematic and scalable methods for evaluating open science practices in transportation research. It presents the first application of large language models (LLMs) to this domain, developing an automated pipeline to efficiently identify data and code sharing across 10,724 papers published in Transportation Research journals from 2019 to 2024. By integrating human-annotated validation with inter-rater reliability analysis, the approach achieves both scalability and contextual accuracy, overcoming limitations of purely manual or bibliometric methods. The findings reveal that only approximately 3% of papers share both data and code, and that such openness is not significantly associated with higher citation rates or faster peer review, highlighting a critical gap in current incentive structures for open science adoption.

📝 Abstract
Open science initiatives have strengthened scientific integrity and accelerated research progress across many fields, but the state of their practice within transportation research remains under-investigated. Key features of open science, defined here as data and code availability, are difficult to extract due to the inherent complexity of the field. Previous work has either been limited to small-scale studies due to the labor-intensive nature of manual analysis or has relied on large-scale bibliometric approaches that sacrifice contextual richness. This paper introduces an automatic and scalable feature-extraction pipeline to measure data and code availability in transportation research. We employ Large Language Models (LLMs) for this task and validate their performance against a manually curated dataset and through an inter-rater agreement analysis. We applied this pipeline to examine 10,724 research articles published in the Transportation Research Part series of journals between 2019 and 2024. Our analysis found that only 5% of quantitative papers shared a code repository, 4% of quantitative papers shared a data repository, and about 3% of papers shared both, with trends differing across journals, topics, and geographic regions. We found no significant difference in citation counts or review duration between papers that provided data and code and those that did not, suggesting a misalignment between open science efforts and traditional academic metrics. Consequently, encouraging these practices will likely require structural interventions from journals and funding agencies to supplement the lack of direct author incentives. The pipeline developed in this study can be readily scaled to other journals, representing a critical step toward the automated measurement and monitoring of open science practices in transportation research.
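The abstract notes that the LLM pipeline was validated against a manually curated dataset through an inter-rater agreement analysis. A common statistic for agreement between two raters on categorical labels is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. The sketch below is a minimal, self-contained illustration with made-up labels; it is not the authors' actual validation code, and the paper does not specify which agreement statistic was used.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each rater's label frequencies.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items with identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of the two raters' marginal frequencies,
    # summed over labels.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum(counts_a[label] * counts_b.get(label, 0) for label in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels: does the paper share a code repository?
human = ["yes", "no", "no", "no", "yes", "no", "no", "no"]
llm   = ["yes", "no", "no", "yes", "yes", "no", "no", "no"]
print(round(cohens_kappa(human, llm), 3))  # → 0.714
```

Here the raw agreement is 7/8 = 0.875, but because "no" dominates both raters' labels, chance agreement is already 0.5625, yielding a kappa of about 0.71 (commonly read as substantial agreement).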
Problem

Research questions and friction points this paper is trying to address.

open science
transportation research
data availability
code availability
research reproducibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
Open Science
Automated Feature Extraction
Data and Code Availability
Transportation Research