Does the Tool Matter? Exploring Some Causes of Threats to Validity in Mining Software Repositories

📅 2025-01-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Implementation discrepancies across software repository mining tools can severely threaten the validity of empirical findings. Method: We conduct a dual-tool comparative analysis of 10 large-scale open-source projects, systematically identifying how minor implementation differences, such as commit parsing logic and author deduplication rules, induce up to 500% deviation in key metrics (e.g., commit count, developer count). We propose a "tool-level configuration + post-hoc normalization" co-optimization framework to mitigate metric divergence, and perform multi-tool experiments, quantitative consistency assessment, and code-level root-cause analysis. Contribution/Results: We identify six technical sources undermining data validity and establish a validity assessment paradigm for Mining Software Repositories (MSR) research that explicitly addresses tool heterogeneity, thereby enabling rigorous, reproducible, and comparable empirical software engineering studies.
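The commit-count divergence mentioned above comes down to traversal rules. A minimal sketch (not taken from the paper; the DAG and function names are hypothetical) of how three plausible counting conventions, analogous to git's default log, `--no-merges`, and `--first-parent`, yield different "numbers of commits" from the same history:

```python
# Hypothetical commit DAG: commit id -> list of parent ids.
# "m" is a merge commit (two parents).
dag = {
    "a": [],          # root commit
    "b": ["a"],
    "c": ["a"],       # side branch
    "m": ["b", "c"],  # merge of b and c
    "d": ["m"],       # current head
}

def count_all(dag):
    # Every reachable commit, merges included (cf. `git rev-list --count`)
    return len(dag)

def count_no_merges(dag):
    # Skip commits with more than one parent (cf. `git log --no-merges`)
    return sum(1 for parents in dag.values() if len(parents) <= 1)

def count_first_parent(dag, head):
    # Walk only the first-parent chain (cf. `git log --first-parent`)
    n, cur = 0, head
    while cur is not None:
        n += 1
        parents = dag[cur]
        cur = parents[0] if parents else None
    return n

print(count_all(dag))               # 5
print(count_no_merges(dag))         # 4
print(count_first_parent(dag, "d")) # 4
```

On a toy five-commit history the spread is already 5 vs 4; on large projects with heavy branching, such defaults diverging per tool plausibly compounds into the large deviations the study reports.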

📝 Abstract
Software repositories are an essential source of information for software engineering research on topics such as project evolution and developer collaboration. Appropriate mining tools and analysis pipelines are therefore an indispensable precondition for many research activities. Ideally, valid results should not depend on technical details of data collection and processing. It is, however, widely acknowledged that mining pipelines are complex, with a multitude of implementation decisions made by tool authors based on their interests and assumptions. This raises the question of whether (and to what extent) tools agree in their results and are interchangeable. In this study, we use two tools to extract and analyse ten large software projects, quantitatively and qualitatively comparing results and derived data to better understand this concern. We analyse discrepancies from a technical point of view, and adjust code and parametrisation to minimise replication differences. Our results indicate that despite similar trends, even simple metrics such as the numbers of commits and developers may differ by up to 500%. We find that such substantial differences are often caused by minor technical details. We show how tool-level and data post-processing changes can overcome these issues, but find they may require considerable efforts. We summarise identified causes in our lessons learned to help researchers and practitioners avoid common pitfalls, and reflect on implementation decisions and their influence in ensuring obtained data meets explicit and implicit expectations. Our findings lead us to hypothesise that similar uncertainties exist in other analysis tools, which may limit the validity of conclusions drawn in tool-centric research.
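The developer-count discrepancies the abstract describes can arise from identity-resolution rules alone. A minimal sketch (a hypothetical illustration, not the paper's code; the names and emails are invented) of how two plausible deduplication rules count a different number of developers from the same commit log:

```python
# Invented (author name, author email) pairs from a commit log.
commits = [
    ("Alice Smith", "alice@example.com"),
    ("Alice Smith", "asmith@corp.example.com"),  # same person, work email
    ("Bob Jones",   "bob@example.com"),
    ("bob jones",   "bob@example.com"),          # same person, name case differs
]

# Rule A: deduplicate by exact (name, email) pair
devs_a = {(name, email) for name, email in commits}

# Rule B: deduplicate by lower-cased email only
devs_b = {email.lower() for _, email in commits}

print(len(devs_a))  # 4 developers under rule A
print(len(devs_b))  # 3 developers under rule B
```

Neither rule is wrong in isolation; the point, echoing the paper's argument, is that two tools silently applying different rules report different "facts" about the same repository.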
Problem

Research questions and friction points this paper is trying to address.

Software Engineering
Data Analysis Variability
Research Reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Software Engineering Research
Analysis Tool Variability
Methodological Adjustments