🤖 AI Summary
Under adaptive data collection, the i.i.d. assumption fails and conventional causal inference methods struggle to characterize counterfactual outcome distributions. To address this, the paper presents the first extension of Kernel Treatment Effect (KTE) estimation to adaptive experimental settings, proposing a distributional causal inference framework grounded in Reproducing Kernel Hilbert Spaces (RKHS). Methodologically, it combines doubly robust scores, variance stabilization, and a Hilbert-space martingale central limit theorem to construct an asymptotically normal kernel-distance test statistic that can detect both mean shifts and higher-order moment discrepancies. The approach comes with strong theoretical guarantees, including valid Type-I error control. Empirically, it significantly outperforms adaptive baselines that estimate only scalar treatment effects, achieving superior statistical power and robustness.
📝 Abstract
Adaptive experiments improve efficiency by adjusting treatment assignments based on past outcomes, but this adaptivity breaks the i.i.d. assumption that underpins classical asymptotics. At the same time, many questions of interest are distributional, extending beyond average effects. Kernel treatment effects (KTE) provide a flexible framework by representing counterfactual outcome distributions in an RKHS and comparing them via kernel distances. We present the first kernel-based framework for distributional inference under adaptive data collection. Our method combines doubly robust scores with variance stabilization to ensure asymptotic normality via a Hilbert-space martingale CLT, and introduces a sample-fitted stabilized test with valid type-I error control. Experiments show the test is well calibrated and powerful against both mean shifts and higher-moment differences, outperforming adaptive baselines limited to scalar effects.
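To make the core quantity concrete, the sketch below estimates a squared kernel distance (an MMD²-style statistic) between the two counterfactual outcome embeddings using inverse-propensity weights recorded by the adaptive design. This is a simplified stand-in for the paper's full method: it uses plain IPW scores rather than doubly robust ones, omits the variance stabilization and the martingale-CLT-based calibration, and the function names, the Gaussian kernel choice, and the simulated bandit-style propensities are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(a, b, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between 1-d outcome vectors a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2 * bandwidth ** 2))

def ipw_kernel_distance(y, treat, prop, bandwidth=1.0):
    """Squared kernel distance between IPW estimates of the counterfactual
    mean embeddings. `prop` holds the assignment probabilities actually used
    by the adaptive design at each step, so the weights remain valid even
    though the observations are not i.i.d. (Simplified IPW version; the
    paper uses doubly robust scores with variance stabilization.)"""
    n = len(y)
    w1 = treat / prop              # weights targeting the treated arm
    w0 = (1 - treat) / (1 - prop)  # weights targeting the control arm
    K = rbf_kernel(y, y, bandwidth)
    # ||mu_hat_1 - mu_hat_0||^2 in the RKHS, expanded via the kernel trick:
    # <w1, K w1> - 2 <w1, K w0> + <w0, K w0>, normalized by n^2.
    return (w1 @ K @ w1 - 2 * w1 @ K @ w0 + w0 @ K @ w0) / n ** 2

# Toy adaptive-style data: time-varying propensities, a mean shift of 0.5.
rng = np.random.default_rng(0)
n = 2000
prop = np.clip(rng.uniform(0.2, 0.8, n), 0.05, 0.95)
treat = rng.binomial(1, prop)
y = np.where(treat == 1, rng.normal(0.5, 1.0, n), rng.normal(0.0, 1.0, n))
print(ipw_kernel_distance(y, treat, prop))
```

Because the statistic is the quadratic form (w1 - w0)ᵀK(w1 - w0) with a positive semi-definite kernel matrix, it is nonnegative by construction; a characteristic kernel such as the Gaussian is what lets it pick up higher-moment differences, not just mean shifts.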