🤖 AI Summary
To address the lack of interpretability in automatic detection and segmentation of renal cysts in CT imaging, this paper proposes the first dual-path interpretable framework integrating counterfactual reasoning and Bayesian uncertainty modeling. First, a VAE-GAN generates causally grounded counterfactual images, and gradient-based editing uncovers causal relationships between image features and segmentation outputs. Second, posterior sampling over weight space constructs an uncertainty map to precisely localize high-uncertainty regions. Evaluated on multi-center 3D CT data, the method achieves state-of-the-art Dice scores for segmentation; counterfactual images yield segmentation performance statistically indistinguishable from original images; and radiomic biomarkers with positive/negative predictive value are successfully identified. This work significantly advances pixel-level interpretability, model robustness, and clinical trustworthiness.
📝 Abstract
Routine computed tomography (CT) scans often detect a wide range of renal cysts, some of which may be malignant. Early and precise localization of these cysts can significantly aid quantitative image analysis. Current segmentation methods, however, do not offer sufficient interpretability at the feature and pixel levels, emphasizing the necessity for an explainable framework that can detect and rectify model inaccuracies. We developed an interpretable segmentation framework and validated it on a multi-centric dataset. A Variational Autoencoder Generative Adversarial Network (VAE-GAN) was employed to learn the latent representation of 3D input patches and reconstruct input images. Modifying the latent representation along the gradient of the segmentation model generated counterfactual explanations for varying values of the Dice similarity coefficient (DSC). Radiomics features extracted from these counterfactual images, using a ground-truth cyst mask, were analyzed to determine their correlation with segmentation performance. The DSCs obtained on the original images and on the VAE-GAN reconstructions used for counterfactual generation showed no significant differences. Counterfactual explanations highlighted how variations in cyst image features influence segmentation outcomes and revealed model discrepancies. Radiomics features correlating positively and negatively with Dice scores were identified. The uncertainty of the predicted segmentation masks was estimated using posterior sampling of the weight space. The combination of counterfactual explanations and uncertainty maps provided a deeper understanding of the image features within the segmented renal cysts that lead to high uncertainty. The proposed segmentation framework not only achieved high segmentation accuracy but also increased interpretability regarding how image features impact segmentation performance.
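The counterfactual-generation step described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the `decoder` and `seg_net` below are hypothetical stand-ins for the trained VAE-GAN decoder and 3D segmentation network, the patches are flattened 1D tensors instead of 3D CT volumes, and the optimization simply drives the latent code so that the decoded image's soft Dice score moves toward a requested target.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-ins for the paper's trained VAE-GAN decoder and
# segmentation model (the real networks operate on 3D CT patches).
decoder = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 16))
seg_net = nn.Sequential(nn.Linear(16, 16), nn.Sigmoid())

def soft_dice(pred, target, eps=1e-6):
    """Differentiable Dice similarity coefficient (DSC)."""
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def counterfactual(z0, mask, target_dsc, steps=200, lr=0.1):
    """Edit the latent code z via the segmentation model's gradient
    until the decoded image segments at roughly the target DSC."""
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        img = decoder(z)                      # counterfactual image
        dsc = soft_dice(seg_net(img), mask)   # current segmentation quality
        loss = (dsc - target_dsc) ** 2        # push DSC toward the target
        loss.backward()
        opt.step()
    return z.detach(), decoder(z).detach()

mask = (torch.rand(16) > 0.5).float()         # toy ground-truth cyst mask
z_cf, img_cf = counterfactual(torch.randn(8), mask, target_dsc=0.9)
```

Sweeping `target_dsc` over a range of values yields a family of counterfactual images whose radiomics features can then be correlated with segmentation performance, as the abstract describes.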
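The uncertainty estimate from posterior sampling of the weight space can likewise be sketched in miniature. The snippet below uses Monte Carlo dropout as one common practical approximation to weight-posterior sampling (an assumption on our part; the paper's exact sampling scheme is not specified here): keeping dropout active at inference time, each forward pass corresponds to a different weight sample, and the per-pixel variance across passes serves as the uncertainty map.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy segmentation head with dropout; the real model is a 3D network.
seg_net = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(32, 16), nn.Sigmoid(),
)

def uncertainty_map(img, n_samples=50):
    """Per-pixel predictive mean and variance from stochastic
    forward passes (MC dropout as approximate posterior sampling)."""
    seg_net.train()  # keep dropout stochastic at inference
    with torch.no_grad():
        preds = torch.stack([seg_net(img) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.var(dim=0)

img = torch.randn(16)                 # toy flattened CT patch
mean_mask, unc = uncertainty_map(img)  # unc highlights uncertain pixels
```

High-variance pixels in `unc` localize the regions where the segmentation is least reliable, which is what the abstract combines with the counterfactual explanations.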