🤖 AI Summary
Empirical software engineering studies frequently suffer from biased causal inference due to omitted confounding variables, undermining internal validity. This paper systematically introduces causal structural modeling to this domain, proposing a pre-analysis framework comprising bias identification, sensitivity quantification, and counterfactual evaluation, together with an actionable workflow for bias diagnosis and research-design refinement. Applied in two industrial-scale case studies, the approach identified and quantified omitted-variable bias, and the resulting design refinements substantially mitigated threats to validity. The core contribution is establishing pre-study causal modeling as a critical practice for improving the reliability and reproducibility of non-experimental software engineering research, thereby filling a methodological gap in ensuring causal validity.
📝 Abstract
Omitted variable bias occurs when a statistical model leaves out variables that are relevant determinants of the effects under study. This results in the model attributing the missing variables' effect to some of the included variables -- hence over- or under-estimating the latter's true effect. Omitted variable bias presents a significant threat to the validity of empirical research, particularly in non-experimental studies such as those prevalent in empirical software engineering. This paper illustrates the impact of omitted variable bias on two case studies in the software engineering domain, and uses them to present methods to investigate the possible presence of omitted variable bias, to estimate its impact, and to mitigate its drawbacks. The analysis techniques we present are based on causal structural models of the variables of interest, which provide a practical, intuitive summary of the key relations among variables. This paper demonstrates a sequence of analysis steps that inform the design and execution of any empirical study in software engineering. An important observation is that it pays off to invest effort investigating omitted variable bias before actually executing an empirical study, because this effort can lead to a more solid study design, and to a significant reduction in its threats to validity.
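The mechanism the abstract describes, a model attributing an omitted variable's effect to the included ones, can be seen in a minimal simulation. The sketch below uses an entirely hypothetical data-generating process (not from the paper's case studies): a confounder `z` influences both `x` and `y`, and regressing `y` on `x` alone inflates the estimated effect of `x` by exactly the amount the classic omitted-variable-bias formula predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process (illustrative assumption only):
# z confounds x and y; the true direct effect of x on y is 2.0.
z = rng.normal(size=n)
x = 1.5 * z + rng.normal(size=n)
y = 2.0 * x + 3.0 * z + rng.normal(size=n)

def ols(y, *cols):
    """Least-squares fit of y on an intercept plus the given regressors."""
    X = np.column_stack([np.ones_like(y), *cols])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

biased = ols(y, x)     # z omitted: its effect is attributed to x
full = ols(y, x, z)    # z included: recovers the true effect of x

# Textbook OVB formula: bias ~= beta_z * Cov(x, z) / Var(x)
predicted_bias = 3.0 * np.cov(x, z)[0, 1] / np.var(x)

print(f"x coefficient, z omitted:  {biased[1]:.3f}")
print(f"x coefficient, z included: {full[1]:.3f}")
print(f"bias predicted by OVB formula: {predicted_bias:.3f}")
```

With these (assumed) parameters the omitted-`z` regression over-estimates the effect of `x` by roughly 1.4, matching the formula; a negative `beta_z` or negative `Cov(x, z)` would instead produce under-estimation, the other direction the abstract mentions.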