🤖 AI Summary
Real dark-field X-ray data are scarce, which makes training deep learning models for dark-field imaging difficult. This work addresses that challenge by proposing an uncertainty-guided progressive generative adversarial network that, for the first time, enables high-quality synthesis of dark-field images from conventional chest radiographs. The method integrates aleatoric and epistemic uncertainty within an image-to-image translation framework, leveraging Bayesian deep learning and small-angle scattering modeling to progressively enhance structural fidelity. Experimental results show stage-wise improvements in quantitative metrics and strong generalization on out-of-distribution data. This approach significantly improves model reliability, interpretability, and clinical applicability.
📝 Abstract
X-ray dark-field radiography provides diagnostic information complementary to conventional attenuation imaging by visualizing microstructural tissue changes through small-angle scattering. However, the limited availability of such data poses challenges for developing robust deep learning models. In this work, we present the first framework for generating dark-field images directly from standard attenuation chest X-rays using an Uncertainty-Guided Progressive Generative Adversarial Network. The model incorporates both aleatoric and epistemic uncertainty to improve interpretability and reliability. Experiments demonstrate high structural fidelity of the generated images, with quantitative metrics improving consistently across stages. Furthermore, out-of-distribution evaluation confirms that the proposed model generalizes well. Our results indicate that uncertainty-guided generative modeling enables realistic dark-field image synthesis and provides a reliable foundation for future clinical applications.
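The two uncertainty types named above are commonly realized as follows, sketched here in a minimal NumPy example (the function names, the linear "generator", and all numbers are illustrative assumptions, not the paper's actual progressive GAN): aleatoric uncertainty is modeled by predicting a per-pixel mean and log-variance trained with a heteroscedastic loss, while epistemic uncertainty is estimated as the variance across Monte Carlo dropout forward passes.

```python
import numpy as np

rng = np.random.default_rng(0)

def heteroscedastic_loss(y_true, y_pred, log_var):
    """Aleatoric (data) uncertainty loss per pixel:
    0.5 * exp(-s) * (y - y_hat)^2 + 0.5 * s, where s = log(sigma^2).
    The network learns to raise s where its prediction error is large."""
    return np.mean(0.5 * np.exp(-log_var) * (y_true - y_pred) ** 2
                   + 0.5 * log_var)

def mc_dropout_forward(x, weights, p_drop=0.2, n_samples=20):
    """Epistemic (model) uncertainty via Monte Carlo dropout: keep
    dropout active at inference and take the variance over passes."""
    preds = []
    for _ in range(n_samples):
        mask = rng.random(weights.shape) > p_drop   # random dropout mask
        preds.append(x @ (weights * mask) / (1.0 - p_drop))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.var(axis=0)    # mean, epistemic var

# Toy stand-in: an attenuation "patch" mapped by a linear "generator".
x = rng.standard_normal((4, 8))    # 4 pixels, 8 input features
w = rng.standard_normal((8, 1))    # hypothetical generator weights
y_true = x @ w
mean, epistemic_var = mc_dropout_forward(x, w)
loss = heteroscedastic_loss(y_true, mean, log_var=np.zeros_like(mean))
```

In a full model, both uncertainty maps can then guide training (e.g., down-weighting unreliable regions), which is the intuition behind the "uncertainty-guided" refinement described in the abstract.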