🤖 AI Summary
In sequential clinical trials, adaptive stopping decisions triggered by interim analyses can bias the final analysis, leading to overoptimistic posterior estimates of the treatment effect and miscalibrated credible intervals. To quantify this bias, we propose an information-theoretic framework: it integrates the Kullback–Leibler divergence over the parameter space to measure the distortion that interim decisions impose on final Bayesian inference, and defines its expectation as a pre-experimental design-sensitivity metric. The approach combines Bayesian inference, information theory, and group-sequential design, enabling explicit comparison of alternative decision boundaries and prior specifications. Illustrated on a real-world trial of a treatment for central nervous system disorders, the framework offers actionable, quantitative guidance for planning adaptive trial designs and interpreting their results.
📝 Abstract
Group sequential designs enable interim analyses and potential early stopping for efficacy or futility. While these adaptations improve trial efficiency and address ethical considerations, they also introduce bias into the adapted analyses. We demonstrate how failing to account for informative interim decisions in the analysis can substantially affect posterior estimates of the treatment effect, often resulting in overly optimistic credible intervals aligned with the stopping decision. Drawing on information theory, we use the Kullback–Leibler divergence to quantify this distortion and highlight its use for post-hoc evaluation of informative interim decisions, with a focus on end-of-study inference. Unlike pointwise comparisons, this measure provides an integrated summary of the distortion over the whole parameter space. By comparing alternative decision boundaries and prior specifications, we illustrate how this measure can improve the understanding of trial results and inform the planning of future adaptive studies. We also introduce an expected version of this metric to support clinicians in choosing decision boundaries. This guidance complements traditional strategies based on type-I error rate control by offering insights into the distortion introduced into inference on the treatment effect at each interim analysis. Finally, we illustrate the use of this pre-experimental measure in a group sequential trial evaluating a treatment for central nervous system disorders.
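The core idea, a Kullback–Leibler divergence integrated over the whole parameter space rather than a pointwise comparison, can be sketched numerically. The following is a minimal, hypothetical illustration (not the paper's actual construction): it contrasts a naive posterior for the treatment effect θ with a stand-in "stopping-aware" posterior obtained by reweighting toward an assumed efficacy boundary, then computes the KL divergence on a grid. All densities, boundary values, and constants are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Illustrative sketch: quantify, via a grid-based Kullback-Leibler
# divergence, how far a stopping-aware posterior for the treatment
# effect theta sits from a naive posterior that ignores the interim
# stopping decision. Numbers are placeholders, not trial data.

theta = np.linspace(-2.0, 4.0, 4001)   # grid over the treatment effect
d = theta[1] - theta[0]                # grid spacing

# Naive posterior: a normal density, ignoring the stopping rule.
naive = stats.norm.pdf(theta, loc=0.8, scale=0.4)
naive /= naive.sum() * d               # normalize on the grid

# Stopping-aware posterior: here mimicked by reweighting the naive
# density toward values favoured by a hypothetical efficacy boundary
# at theta = 0.5 (a stand-in for a properly adjusted analysis).
adjusted = naive * stats.norm.cdf((theta - 0.5) / 0.3)
adjusted /= adjusted.sum() * d

# D_KL(adjusted || naive): an integrated summary of the distortion
# over the whole parameter space, not a comparison at a single theta.
kl = np.sum(adjusted * np.log(adjusted / naive)) * d
print(f"KL divergence: {kl:.4f} nats")
```

A larger divergence signals that the interim decision rule distorts end-of-study inference more; comparing this quantity across candidate decision boundaries or priors is the kind of use the abstract describes.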