Sensitivity Analysis for Causal ML: A Use Case at Booking.com

📅 2025-10-10
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Causal inference relies on the untestable no-unmeasured-confounding assumption; its violation induces estimation bias. This paper conducts a systematic sensitivity analysis for causal machine learning in an industrial setting (Booking.com), marking the first real-world application of Chernozhukov et al.'s nonparametric bias-bound method to quantify the impact range of unmeasured confounding on causal effect estimates. The method is compatible with mainstream causal machine learning models and imposes no strong functional-form assumptions. Experiments demonstrate that the framework effectively assesses estimation robustness, substantially enhancing the credibility of causal conclusions, and provides business decision-makers with interpretable bias tolerance thresholds. This work fills a critical gap in the systematic industrial practice of sensitivity analysis and advances causal inference from point estimation toward robust inference.
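The problem the paper addresses can be made concrete with a toy simulation (not the paper's method or data; the data-generating process and coefficients below are illustrative assumptions). When a confounder U drives both treatment D and outcome Y but is omitted from the regression, the naive estimate is biased by the textbook omitted-variable-bias term, which sensitivity analysis aims to bound when U is unobservable:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process with an unobserved confounder U.
theta = 2.0                                    # true causal effect of D on Y
U = rng.normal(size=n)                         # unobserved confounder
D = 0.8 * U + rng.normal(size=n)               # treatment depends on U
Y = theta * D + 1.5 * U + rng.normal(size=n)   # outcome depends on D and U

# Naive estimate: regress Y on D only, omitting U (what a practitioner sees).
X_naive = np.column_stack([np.ones(n), D])
theta_naive = np.linalg.lstsq(X_naive, Y, rcond=None)[0][1]

# Oracle estimate: adjust for U (infeasible in practice, shown for comparison).
X_full = np.column_stack([np.ones(n), D, U])
theta_adj = np.linalg.lstsq(X_full, Y, rcond=None)[0][1]

# Textbook omitted-variable-bias term: beta_U * Cov(D, U) / Var(D).
bias = 1.5 * np.cov(D, U)[0, 1] / np.var(D)

print(f"naive: {theta_naive:.3f}, adjusted: {theta_adj:.3f}, OVB: {bias:.3f}")
```

Here the naive estimate overstates the true effect by roughly the OVB term (about 0.73 under these coefficients). The Chernozhukov et al. approach generalizes this idea nonparametrically: instead of knowing U's coefficients, the analyst posits bounds on how strongly a confounder could relate to treatment and outcome, and the method translates those into bounds on the bias.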

๐Ÿ“ Abstract
Causal Machine Learning has emerged as a powerful tool for flexibly estimating causal effects from observational data in both industry and academia. However, causal inference from observational data relies on untestable assumptions about the data-generating process, such as the absence of unobserved confounders. When these assumptions are violated, causal effect estimates may become biased, undermining the validity of research findings. In these contexts, sensitivity analysis plays a crucial role, by enabling data scientists to assess the robustness of their findings to plausible violations of unconfoundedness. This paper introduces sensitivity analysis and demonstrates its practical relevance through a (simulated) data example based on a use case at Booking.com. We focus our presentation on a recently proposed method by Chernozhukov et al. (2023), which derives general non-parametric bounds on biases due to omitted variables, and is fully compatible with (though not limited to) modern inferential tools of Causal Machine Learning. By presenting this use case, we aim to raise awareness of sensitivity analysis and highlight its importance in real-world scenarios.
Problem

Research questions and friction points this paper is trying to address.

Assessing causal effect robustness to unobserved confounders
Quantifying bias from omitted variables in causal ML
Validating causal inferences when assumptions are violated
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sensitivity analysis assesses causal ML robustness
Non-parametric bounds address omitted variable biases
Method compatible with modern causal ML tools