Dissecting Model Failures in Abdominal Aortic Aneurysm Segmentation through Explainability-Driven Analysis

📅 2026-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of segmenting complex abdominal aortic aneurysms (AAAs) in CT images, where irrelevant anatomical structures and low-contrast, thin pathological regions often lead to segmentation failures. The authors propose an explainable AI (XAI)-guided encoder-shaping framework that, for the first time, explicitly integrates attribution guidance into both representation learning and decoding. Specifically, a dense attribution focus map—termed the XAI field—is generated from the final encoder layer to align prediction probabilities with clinically critical regions. This field drives a lightweight refinement pathway and a confidence prior during inference, effectively suppressing distractions while preserving fine structural details. Built upon the SAM architecture, the method significantly outperforms the original SAM on clinically challenging, previously failed cases, demonstrating that XAI-guided focusing enhances segmentation reliability in complex scenarios.
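The summary's core mechanism can be sketched in a few lines: a dense attribution map ("XAI field") derived from final-encoder activations, and an alignment term that pushes predicted probability mass toward that field. This is a minimal illustrative sketch, not the authors' implementation: the function names, the Grad-CAM-style feature-times-gradient attribution, and the KL-based alignment loss are all assumptions for exposition.

```python
import numpy as np

def xai_field(encoder_feats, grads):
    """Dense attribution map from final-encoder features and their gradients
    (a Grad-CAM-style proxy; the paper's exact attribution method may differ)."""
    attr = np.abs(encoder_feats * grads).sum(axis=0)   # collapse channels -> (H, W)
    attr -= attr.min()
    return attr / (attr.max() + 1e-8)                  # normalize to [0, 1]

def focus_alignment_loss(probs, field, eps=1e-8):
    """Encourage predicted foreground probability mass to agree with the
    XAI field: KL divergence between the two spatially normalized maps."""
    p = probs / (probs.sum() + eps)
    q = field / (field.sum() + eps)
    return float((p * np.log((p + eps) / (q + eps))).sum())

# Toy example: one channel of 4x4 encoder features.
rng = np.random.default_rng(0)
feats, grads = rng.random((1, 4, 4)), rng.random((1, 4, 4))
field = xai_field(feats, grads)          # dense focus map in [0, 1]
probs = rng.random((4, 4))               # stand-in for predicted probabilities
loss = focus_alignment_loss(probs, field)
```

In this reading, the loss is a control signal only: minimizing it nudges the decoder's probability mass onto regions the encoder already attributes as important, rather than supervising the attribution itself.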

📝 Abstract
Computed tomography image segmentation of complex abdominal aortic aneurysms (AAAs) often fails because models assign internal focus to irrelevant structures or fail to focus on thin, low-contrast targets. We treat where the model looks as a primary training signal, and thus propose an Explainable AI (XAI)-guided encoder-shaping framework. Our method computes a dense, attribution-based encoder focus map ("XAI field") from the final encoder block and uses it in two complementary ways: (i) we align the predicted probability mass to the XAI field to promote agreement between focus and output; and (ii) we route the field into a lightweight refinement pathway and a confidence prior that modulates logits at inference, suppressing distractors while preserving subtle structures. The objective terms serve only as control signals; the contribution is the integration of attribution guidance into representation and decoding. We evaluate on clinically validated challenging cases curated for failure-prone scenarios. Compared to a base SAM setup, our implementation yields substantial improvements. The observed gains suggest that explicitly optimizing encoder focus via XAI guidance is a practical and effective principle for reliable segmentation in complex scenarios.
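The inference-time confidence prior described in the abstract can be illustrated as a log-prior added to decoder logits before thresholding. This is a hedged sketch: the additive form, the `tau` temperature, and the function name are assumptions, since the abstract does not specify how the field modulates the logits.

```python
import numpy as np

def apply_confidence_prior(logits, field, tau=1.0, eps=1e-6):
    """Modulate decoder logits with a log-prior from the XAI field:
    regions with high attributed focus are boosted, distractor regions
    with low focus are suppressed. (Additive log-prior form and tau are
    illustrative assumptions, not the paper's stated formulation.)"""
    return logits + tau * np.log(field + eps)

# Two columns with equal raw logits but very different attributed focus.
logits = np.full((2, 2), 2.0)
field  = np.array([[0.9, 0.1],
                   [0.9, 0.1]])          # left column: high focus
refined = apply_confidence_prior(logits, field)
```

After modulation, the low-focus column's logits drop relative to the high-focus column, so a fixed sigmoid threshold suppresses the distractor region while the focused region survives.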
Problem

Research questions and friction points this paper is trying to address.

abdominal aortic aneurysm
image segmentation
model failure
explainable AI
attention focus
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable AI
Encoder shaping
Attribution-based focus
Segmentation refinement
Confidence prior
Abu Noman Md Sakib
University of Texas at San Antonio
Merjulah Roby
University of Texas at San Antonio
Zijie Zhang
Assistant Professor, University of Texas at San Antonio
Trustworthy Machine Learning · Adversarial A/D · Federated Learning · Graph
Satish Muluk
Drexel University
Mark K. Eskandari
Northwestern University
Ender A. Finol
University of Texas at San Antonio