🤖 AI Summary
In clinical trials, interventions—such as compensation or intensive follow-up—can artificially inflate treatment adherence, inducing generalization bias and undermining external validity to real-world target populations. This paper introduces a systematic causal framework that models trial participation as a perturbation to adherence, formalized via sensitivity parameters that may be conditional on covariates. When adherence data are missing in the target population, the authors propose a doubly robust one-step estimator that accommodates machine learning for propensity score and outcome modeling, balancing interpretability with flexibility. Applied to opioid use disorder, the method transports relapse risk estimates for two pharmacotherapies from a randomized trial to a real-world population, delivering both bounding intervals and Monte Carlo–based distributional estimates. This improves the reliability and robustness of evidence translation for health policy.
📝 Abstract
Randomized clinical trials are considered the gold standard for informing treatment guidelines, but results may not generalize to real-world populations. Generalizability is hindered by distributional differences in baseline covariates and in treatment-outcome mediators. Approaches to address differences in covariates are well established, but approaches to address differences in mediators are more limited. Here we consider the setting where trial activities that differ from usual care settings (e.g., monetary compensation, frequency of follow-up visits) affect treatment adherence. When treatment and adherence data are unavailable for the real-world target population, we cannot identify the mean outcome under a specific treatment assignment (i.e., mean potential outcome) in the target. Therefore, we propose a sensitivity analysis in which a parameter for the relative difference in adherence to a specific treatment between the trial and the target, possibly conditional on covariates, must be specified. We discuss options for specifying the sensitivity analysis parameter based on external knowledge, including setting a range to estimate bounds or specifying a probability distribution from which to repeatedly draw parameter values (i.e., Monte Carlo sampling). We introduce two estimators for the mean counterfactual outcome in the target that incorporate this sensitivity parameter: a plug-in estimator and a one-step estimator that is doubly robust and supports the use of machine learning for estimating nuisance models. Finally, we apply the proposed approach to the motivating application, where we transport the risk of relapse under two different medications for the treatment of opioid use disorder from a trial to a real-world population.
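The two ways of specifying the sensitivity parameter described in the abstract—a plausible range yielding bounds, or a probability distribution sampled via Monte Carlo—can be illustrated with a deliberately simplified sketch. The sketch below is not the paper's estimator: it uses a single unconditional sensitivity parameter `delta` (the ratio of target adherence to trial adherence for one treatment arm), hypothetical trial summary numbers, and a plug-in mixture of adherent and non-adherent mean outcomes; all names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trial summaries for one treatment arm (illustrative only):
p_adhere_trial = 0.80   # adherence probability observed in the trial
mu_adherent = 0.25      # mean outcome (e.g., relapse risk) among adherent
mu_nonadherent = 0.55   # mean outcome among non-adherent

def target_mean(delta):
    """Plug-in mean outcome in the target population, assuming target
    adherence equals delta * trial adherence (delta is the sensitivity
    parameter; clipped so the implied probability stays in [0, 1])."""
    p_adhere_target = np.clip(delta * p_adhere_trial, 0.0, 1.0)
    return p_adhere_target * mu_adherent + (1 - p_adhere_target) * mu_nonadherent

# Option 1: bounds from a plausible range for delta, e.g. [0.5, 1.0]
# (target adherence between half of and equal to trial adherence).
lo, hi = sorted([target_mean(0.5), target_mean(1.0)])
print(f"bounds: [{lo:.3f}, {hi:.3f}]")

# Option 2: Monte Carlo — draw delta repeatedly from a specified
# distribution, here Uniform(0.5, 1.0), and summarize the induced
# distribution of the target mean outcome.
draws = target_mean(rng.uniform(0.5, 1.0, size=10_000))
print(f"MC mean {draws.mean():.3f}, 95% interval "
      f"({np.quantile(draws, 0.025):.3f}, {np.quantile(draws, 0.975):.3f})")
```

In the paper's actual procedure, the sensitivity parameter may vary with covariates and enters a plug-in or doubly robust one-step estimator rather than this simple mixture, but the bounding and Monte Carlo logic is the same.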