🤖 AI Summary
In observational studies with multiple treatments and outcomes, unmeasured confounding impedes valid causal inference. Existing approaches rely on auxiliary variables such as instrumental variables or on factor-model independence structures, while Hill's specificity criterion lacks a formal causal definition and a quantitative implementation. This paper proposes the first nonparametric identification framework grounded in causal specificity: it formally defines a causal specificity assumption and constructs an estimable, quantitative specificity measure; it enables joint exposure-wide and outcome-wide causal discovery without instrumental variables or independence-structure assumptions; and it incorporates a sensitivity analysis to assess the robustness of the key assumption. The framework strengthens the theoretical foundation of Hill's specificity criterion and also offers potential solutions to invalid instruments in Mendelian randomization and to selection bias in missing-data settings. By integrating specificity-driven identification with rigorous sensitivity evaluation, it enhances the reliability and interpretability of complex association analyses.
📝 Abstract
Hill's specificity criterion has been highly influential in biomedical and epidemiological research. However, it remains controversial, and its application often relies on subjective, qualitative analysis without a comprehensive and rigorous causal theory. Focusing on unmeasured confounding adjustment with multiple treatments and multiple outcomes, this paper develops a formal, quantitative framework for leveraging specificity for causal inference in observational studies. The proposed framework introduces a causal specificity assumption, a quantitative measure of specificity, a hypothesis testing procedure, and identification and estimation strategies. Identification is established under a nonparametric outcome model. The causal specificity assumption concerns only the breadth of causal associations, in contrast to Hill's specificity, which concerns observed associations, and to existing confounding adjustment methods, which rely on auxiliary variables (e.g., instrumental variables, negative controls) or independence structures (e.g., factor models). A sensitivity analysis procedure is proposed to assess the robustness of the test against violations of this assumption. The framework is particularly suited to exposure-wide and outcome-wide studies, where joint causal discovery across multiple treatments and multiple outcomes is of interest. It also offers a potential tool for addressing other sources of bias, such as invalid instruments in Mendelian randomization and selection bias in missing-data analysis.