🤖 AI Summary
In vaccine efficacy (VE) trials, unmeasured time-varying exposure status induces a systematic bias, termed "time-exposure-dependent bias," which standard analyses routinely ignore, leading to underestimation of per-exposure protection. This work introduces a mathematical framework to quantify this bias, deriving closed-form approximations that require no observed exposure data. Building on the Cox proportional hazards model, the authors formally characterize how temporal dependence in exposure affects VE estimation. The results demonstrate that, under realistic epidemiological parameterizations, the bias can be substantial: VE may be underestimated by 10–30%. This framework provides methodological foundations for trial design, sensitivity analysis, and causal inference in infectious disease prevention studies.
📝 Abstract
In infectious disease trials analyzed with time-to-event methods such as the Cox proportional hazards model, it is well established that unmeasured heterogeneity in exposure or infection risk can bias point estimates of the per-contact vaccine efficacy (VE) downward. In this study, we examine a previously unreported source of bias, arising from temporally correlated exposure status, that is typically unmeasured and overlooked in standard analyses. Although this form of bias can plausibly affect a wide range of VE trials, it has received limited empirical attention. We develop a mathematical framework to characterize the mechanism of this bias and derive a closed-form approximation that quantifies its magnitude without requiring direct measurement of exposure. Our findings show that, under realistic parameter settings, the resulting bias can be substantial. These results suggest that temporally correlated exposure should be recognized as a potentially important factor in the design and analysis of infectious disease vaccine trials.
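The abstract's opening claim, that unmeasured heterogeneity in exposure biases per-contact VE estimates downward, can be illustrated with a small Monte Carlo sketch. The code below is not from the paper: it illustrates the classical exposure-heterogeneity (frailty) effect rather than the paper's temporally correlated mechanism, and all names and parameter values (contact rate, per-contact infection probability, gamma-distributed frailty, two-year follow-up) are assumptions chosen for demonstration. VE is estimated as one minus the incidence-rate ratio between arms.

```python
import random


def simulate_ve(n, follow_up, contact_rate, p_infect, ve_per_contact,
                frailty_shape=None, seed=0):
    """Estimate VE as 1 - (incidence-rate ratio) in a two-arm trial.

    Each subject's time to infection is exponential with rate
    lambda_i * p_infect, reduced by a factor (1 - ve_per_contact) in the
    vaccine arm (a "leaky" vaccine acting on every contact). If
    frailty_shape is given, lambda_i is gamma-distributed with mean
    contact_rate, i.e. unmeasured between-subject exposure heterogeneity;
    otherwise every subject has the same contact rate.
    """
    rng = random.Random(seed)
    rates = {}
    for arm, ve in (("placebo", 0.0), ("vaccine", ve_per_contact)):
        cases, person_time = 0, 0.0
        for _ in range(n):
            lam = contact_rate
            if frailty_shape is not None:
                # Gamma frailty: mean contact_rate, variance grows as
                # frailty_shape shrinks (more heterogeneity).
                lam = rng.gammavariate(frailty_shape,
                                       contact_rate / frailty_shape)
            hazard = lam * p_infect * (1.0 - ve)
            t = rng.expovariate(hazard) if hazard > 0 else float("inf")
            if t < follow_up:
                cases += 1
                person_time += t       # infected: at risk until event
            else:
                person_time += follow_up  # censored at end of follow-up
        rates[arm] = cases / person_time
    return 1.0 - rates["vaccine"] / rates["placebo"]


# True per-contact VE is 0.8 in both scenarios below (illustrative values).
ve_hom = simulate_ve(100_000, 2.0, 10.0, 0.02, 0.8)
ve_het = simulate_ve(100_000, 2.0, 10.0, 0.02, 0.8, frailty_shape=0.5)
# ve_hom recovers ~0.8; ve_het falls below it, because highly exposed
# placebo subjects are infected and depleted early, attenuating the
# observed rate ratio toward the null.
```

The mechanism is selective depletion of susceptibles: under heterogeneity, the placebo arm loses its highest-risk members first, so the marginal hazard ratio drifts toward 1 over follow-up even though the per-contact effect is constant.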