🤖 AI Summary
Existing sensitivity analysis methods—such as design sensitivity and Bahadur–Rosenbaum efficiency—are restricted to binary treatments, and direct extension or dichotomization for non-binary (continuous, ordinal, or multicategorical) treatments induces severe bias. No unified framework exists for robustness assessment under such settings.
Method: We propose the first general sensitivity analysis framework for non-binary treatments, introducing a treatment-agnostic universal sensitivity parameter. We rigorously derive closed-form asymptotic expressions for design sensitivity and efficiency applicable to arbitrary treatment types, and develop a data-adaptive multi-statistic fusion strategy.
Results: We theoretically prove that dichotomization fails in sensitivity analysis. Extensive simulations and empirical studies demonstrate that our framework substantially improves robustness to unmeasured confounding and enhances statistical efficiency. It provides a principled, generalizable tool for quantifying robustness in causal inference with non-binary treatments.
📝 Abstract
To ensure reliable causal conclusions from observational (i.e., non-randomized) studies, researchers routinely conduct sensitivity analysis to assess robustness to hidden bias due to unmeasured confounding. In matched observational studies (one of the most widely used observational study designs), two foundational concepts, design sensitivity and Bahadur–Rosenbaum efficiency, are used to quantify the robustness of test statistics and study designs in sensitivity analyses. Unfortunately, these measures of robustness were not developed for non-binary treatments (e.g., continuous or ordinal treatments), and consequently, prevailing recommendations about robust tests may be misleading. In this work, we provide a unified framework for quantifying the robustness of test statistics and study designs that is agnostic to treatment type. We first present a negative result about a popular, ad hoc approach based on dichotomizing the treatment variable. Next, we introduce a universal, nearly sufficient sensitivity parameter that is agnostic to the underlying treatment type. We then generalize and derive all-in-one formulas for design sensitivity and Bahadur–Rosenbaum efficiency that can be used for any treatment type. We also propose a general data-adaptive approach to combining candidate test statistics to enhance robustness against unmeasured confounding. Extensive simulation studies and a data application illustrate our proposed framework. For practice, our results yield new, previously undiscovered insights about the robustness of tests and study designs in matched observational studies, especially when investigators are faced with non-binary treatments.
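For readers new to matched sensitivity analysis, the binary-treatment baseline that this framework generalizes can be sketched in a few lines. Under Rosenbaum's sensitivity model with parameter Γ ≥ 1, the worst-case one-sided p-value for Wilcoxon's signed-rank statistic over n matched pairs is obtained by letting each pair contribute its rank with probability Γ/(1+Γ) and applying a normal approximation. The function name and example numbers below are illustrative, not from the paper:

```python
import math

def worst_case_pvalue(t_obs, n, gamma):
    """Upper bound on the one-sided p-value of Wilcoxon's signed-rank
    statistic for n matched pairs, under Rosenbaum's sensitivity model
    with parameter gamma >= 1 (gamma = 1 means no hidden bias)."""
    p_plus = gamma / (1.0 + gamma)        # worst-case per-pair sign probability
    mean = p_plus * n * (n + 1) / 2.0     # mean of the bounding distribution
    var = p_plus * (1.0 - p_plus) * n * (n + 1) * (2 * n + 1) / 6.0
    z = (t_obs - mean) / math.sqrt(var)
    # Upper-tail normal probability via the error function
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical example: 50 pairs, observed signed-rank statistic T = 1000.
# The worst-case p-value grows with gamma; the study's conclusion is deemed
# robust up to the largest gamma at which it stays below the alpha level.
for gamma in (1.0, 1.5, 2.0):
    print(f"Gamma = {gamma}: worst-case p <= {worst_case_pvalue(1000, 50, gamma):.4f}")
```

A key point of the abstract is that this binary-treatment recipe does not transfer to continuous or ordinal treatments by simply dichotomizing them, which motivates the paper's treatment-agnostic sensitivity parameter.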