🤖 AI Summary
This work addresses theoretical limitations of diffusion models in capturing complex spatiotemporal dependencies and quantifying uncertainty in time-series imputation. Methodologically, we propose a novel diffusion-transformer-based imputation framework: (i) we establish a conditional score function approximation theory, derive an upper bound on sample complexity, and, for the first time in the literature, construct tight confidence regions for imputed values under missingness; (ii) we introduce a hybrid masking training strategy that jointly optimizes generation quality and uncertainty calibration. Theoretically, we characterize how missingness patterns affect the statistical efficiency of imputation. Empirically, our method achieves significant improvements in imputation accuracy and robustness across diverse missingness mechanisms (e.g., MAR, MNAR, block-wise missingness), while delivering verifiable, well-calibrated uncertainty estimates. This work establishes a new paradigm for trustworthy time-series imputation, grounded in rigorous theory and validated through comprehensive experiments.
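In practice, confidence regions for imputed values are often approximated from repeated draws of the conditional sampler. The sketch below is a generic sample-quantile construction, not the paper's theoretical region: `confidence_region` and the `alpha` parameter are illustrative names, and the Gaussian draws stand in for independent imputations from a conditional diffusion sampler.

```python
import numpy as np

def confidence_region(imputed_samples, alpha=0.1):
    """Per-coordinate (1 - alpha) interval from repeated imputations.

    imputed_samples: array of shape (n_samples, n_missing) holding
    independent draws for the missing entries. This is a generic
    empirical-quantile construction, used here only to illustrate
    the idea of an uncertainty region over imputed values.
    """
    lo = np.quantile(imputed_samples, alpha / 2, axis=0)
    hi = np.quantile(imputed_samples, 1 - alpha / 2, axis=0)
    return lo, hi

# Toy check: draws centered at the "true" value 0, so the 90%
# intervals should cover 0 at nearly all coordinates.
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=(500, 20))
lo, hi = confidence_region(samples, alpha=0.1)
coverage = np.mean((lo <= 0.0) & (0.0 <= hi))
```

A tighter theoretical construction, as pursued in the paper, would replace the raw quantiles with bounds derived from the score-approximation guarantees.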
📝 Abstract
Imputation methods play a critical role in enhancing the quality of practical time-series data, which often suffer from pervasive missing values. Recently, diffusion-based generative imputation methods have demonstrated remarkable success compared to autoregressive and conventional statistical approaches. Despite this empirical success, theoretical understanding of how well diffusion-based models capture the complex spatial and temporal dependencies between missing and observed values remains limited. Our work addresses this gap by investigating the statistical efficiency of conditional diffusion transformers for imputation and by quantifying the uncertainty in missing values. Specifically, we derive statistical sample complexity bounds based on a novel approximation theory for conditional score functions using transformers and, through this, construct tight confidence regions for missing values. Our findings also reveal that the efficiency and accuracy of imputation are significantly influenced by the missingness patterns. Finally, we validate these theoretical insights through simulations and propose a mixed-masking training strategy to enhance imputation performance.
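A mixed-masking training strategy of the kind described above can be sketched as randomly alternating between masking mechanisms when generating training targets. The function names, the point-masking rate `p`, the `block_len`, and the `mix` probability below are all illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def point_mask(shape, rng, p=0.2):
    # Mask entries independently at rate p (point-wise missingness);
    # p = 0.2 is an illustrative choice.
    return rng.random(shape) < p

def block_mask(shape, rng, block_len=8):
    # Mask one contiguous time block across all channels
    # (block-wise missingness); block_len is illustrative.
    T, D = shape
    mask = np.zeros(shape, dtype=bool)
    start = rng.integers(0, max(T - block_len, 1))
    mask[start:start + block_len, :] = True
    return mask

def sample_training_mask(shape, rng, mix=0.5):
    # Hybrid masking: pick a masking mechanism at random each
    # training step, so the model sees both point-wise and
    # block-wise missingness patterns during training.
    if rng.random() < mix:
        return point_mask(shape, rng)
    return block_mask(shape, rng)

rng = np.random.default_rng(1)
mask = sample_training_mask((48, 4), rng)  # (time steps, channels)
```

During training, entries under `mask` would be hidden from the model and used as reconstruction targets, exposing the network to the diverse missingness mechanisms it faces at test time.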