Stress-Testing Assumptions: A Guide to Bayesian Sensitivity Analyses in Causal Inference

📅 2026-02-27
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Causal inference relies on untestable identification assumptions, whose violation can lead to biased conclusions, yet existing Bayesian sensitivity analyses lack a unified framework and practical guidance. This work proposes a cohesive Bayesian framework grounded in the "missing data" perspective, systematically integrating four sensitivity analyses: exposure misclassification, unmeasured confounding, and non-randomly missing outcomes modeled both parametrically and nonparametrically. Implemented in Stan, the approach offers an open-source, reproducible modeling strategy that, combined with methodological exposition and illustrative examples, substantially lowers the barrier to application. For the first time, multiple sensitivity concerns are harmonized within a single framework, providing researchers with readily applicable tools to assess how robust causal effect estimates are to violations of key assumptions.

πŸ“ Abstract
While observational data are routinely used to estimate causal effects of biomedical treatments, doing so requires special methods to adjust for observed confounding. These methods invariably rely on untestable statistical and causal identification assumptions. When these assumptions do not hold, sensitivity analysis methods can be used to characterize how different violations may change our inferences. The Bayesian approach to sensitivity analyses in causal inference has unique advantages, as it allows users to encode subjective beliefs about the direction and magnitude of assumption violations via prior distributions and make inferences using the updated posterior. However, uptake of these methods remains low since implementation requires substantial methodological knowledge. Moreover, while implementation with publicly available software is possible, it is not straightforward. At the same time, there are few papers that provide practical guidance on these fronts. In this paper, we walk through four examples of Bayesian sensitivity analyses: 1) exposure misclassification, 2) unmeasured confounding, and missing not-at-random outcomes with 3) parametric and 4) nonparametric Bayesian models. We show how all of these can be done using a unified Bayesian "missing data" approach. We also cover implementation using Stan, a publicly available open-source software for fitting Bayesian models. To the best of our knowledge, this is the first paper that presents a unified approach with code, examples, and methodology in a three-pronged illustration of sensitivity analyses in Bayesian causal inference. Our goal is for the reader to walk away with implementation-level knowledge.
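To make the idea concrete, here is a minimal sketch of one of the four analyses the abstract lists, sensitivity to unmeasured confounding. This is not the paper's Stan implementation; it is a simplified pure-Python probabilistic bias analysis under assumed priors, using the classic bias-factor formula for a single binary unmeasured confounder U. All prior choices (the log-normal prior on the U-outcome risk ratio and the Beta priors on U's prevalence by exposure group) are illustrative assumptions, not values from the paper.

```python
import random
import statistics

def bias_factor(rr_ud: float, p1: float, p0: float) -> float:
    """Ratio of the confounded to the confounder-adjusted risk ratio for a
    single binary unmeasured confounder U, where rr_ud is the U-outcome risk
    ratio and p1, p0 are P(U=1) among the exposed and unexposed."""
    return (rr_ud * p1 + (1 - p1)) / (rr_ud * p0 + (1 - p0))

def confounding_sensitivity(rr_obs: float, n_draws: int = 10_000,
                            seed: int = 1) -> list[float]:
    """Draw sensitivity parameters from (hypothetical) priors and return the
    distribution of bias-adjusted risk ratios."""
    rng = random.Random(seed)
    adjusted = []
    for _ in range(n_draws):
        rr_ud = rng.lognormvariate(0.5, 0.25)  # prior: U-outcome risk ratio
        p1 = rng.betavariate(4, 6)             # prior: P(U=1 | exposed)
        p0 = rng.betavariate(2, 8)             # prior: P(U=1 | unexposed)
        adjusted.append(rr_obs / bias_factor(rr_ud, p1, p0))
    return adjusted

if __name__ == "__main__":
    draws = confounding_sensitivity(rr_obs=1.8)
    lo, med, hi = (statistics.quantiles(draws, n=20)[0],
                   statistics.median(draws),
                   statistics.quantiles(draws, n=20)[-1])
    print(f"bias-adjusted RR: median {med:.2f}, 90% interval ({lo:.2f}, {hi:.2f})")
```

Under these priors, which encode positive confounding (U more prevalent among the exposed and harmful for the outcome), the adjusted risk ratios are pulled toward the null relative to the observed estimate. The paper's full Bayesian treatment goes further by fitting the sensitivity parameters jointly with the outcome model in Stan rather than adjusting a point estimate after the fact.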
Problem

Research questions and friction points this paper is trying to address.

causal inference
Bayesian sensitivity analysis
untestable assumptions
observational data
confounding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian sensitivity analysis
causal inference
unmeasured confounding
missing data framework
Stan implementation