🤖 AI Summary
Deep learning–based CT reconstruction often lacks reliable confidence estimation, making it difficult to identify hallucinations and assess result credibility. This work proposes a sequential likelihood mixing framework for uncertainty quantification, establishing—under a Poisson noise forward model consistent with the Beer–Lambert law—the first theoretically guaranteed confidence regions for deep CT reconstruction. The method is broadly applicable across diverse reconstruction architectures, including U-Nets, diffusion models, and classical algorithms. It outperforms conventional approaches by yielding substantially tighter confidence regions, effectively detecting reconstruction hallucinations, and enabling interpretable visualizations. These capabilities enhance the reliability and safety of medical image reconstruction in clinical settings.
📝 Abstract
We present a principled framework for confidence estimation in computed tomography (CT) reconstruction. Building on the sequential likelihood mixing framework (Kirschner et al., 2025), we establish confidence regions with theoretical coverage guarantees for deep-learning-based CT reconstructions. We consider a realistic forward model following the Beer–Lambert law, i.e., a log-linear forward model with Poisson noise, closely reflecting clinical and scientific imaging conditions. The framework is general and applies to both classical algorithms and deep learning reconstruction methods, including U-Nets, U-Net ensembles, and generative diffusion models. Empirically, we demonstrate that deep reconstruction methods yield substantially tighter confidence regions than classical reconstructions, without sacrificing theoretical coverage guarantees. Our approach allows the detection of hallucinations in reconstructed images and provides interpretable visualizations of confidence regions. This establishes deep models not only as powerful estimators, but also as reliable tools for uncertainty-aware medical imaging.
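To make the forward model concrete, the sketch below simulates the log-linear Beer–Lambert measurement process with Poisson noise and evaluates a Poisson negative log-likelihood for a candidate reconstruction. All names, dimensions, and the random toy operator `A` are illustrative assumptions, not taken from the paper; sequential-likelihood methods compare such likelihood values across candidate reconstructions to form confidence regions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumptions for illustration, not the paper's configuration):
# A is a small surrogate for a discretized CT system matrix, x the true
# attenuation image, I0 the incident photon flux per detector ray.
n_pixels, n_rays = 64, 128
A = rng.uniform(0.0, 1.0, size=(n_rays, n_pixels)) / n_pixels  # toy forward operator
x = rng.uniform(0.0, 2.0, size=n_pixels)                       # attenuation coefficients
I0 = 1e4                                                       # source photon count

# Beer-Lambert law: expected detector counts decay exponentially with the
# line-integral attenuation A @ x (hence "log-linear" forward model).
expected_counts = I0 * np.exp(-A @ x)

# Measurements are Poisson-distributed photon counts.
y = rng.poisson(expected_counts)

def neg_log_likelihood(x_hat):
    """Poisson negative log-likelihood of a candidate reconstruction x_hat,
    up to an additive constant independent of x_hat."""
    mu = I0 * np.exp(-A @ x_hat)
    return float(np.sum(mu - y * np.log(mu)))
```

A confidence region, informally, is the set of candidate images whose negative log-likelihood does not exceed a data-dependent threshold; the paper's contribution is choosing that threshold so the region has guaranteed coverage even when candidates come from deep networks.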