Enhancing the Interpretability of Rule-based Explanations through Information Retrieval

📅 2025-07-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
In predicting upper-limb lymphedema following breast cancer radiotherapy, existing interpretable AI models lack clinically intelligible quantitative explanations. This paper proposes a method that integrates information retrieval (IR) metrics with rule-based modeling: it is the first to apply standard IR evaluation measures, such as Average Precision (AP) and Normalized Discounted Cumulative Gain (NDCG), to quantify how strongly individual clinical risk factors contribute to rule-based predictions, and it embeds these scores in an attribution-based explanation framework that makes feature importance comparable and verifiable. A user study comparing the proposed output with the raw output of the Explainable AI model suggests that the explanations are more intuitive and clinically consistent, and that clinicians find them more comprehensible and actionable for decision support than conventional XAI output. The core innovation is transferring IR's ranking-evaluation paradigm to medical interpretability modeling, thereby bridging algorithmic interpretability and clinical utility.
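
The summary names AP and NDCG as the IR measures used to score risk-factor contributions. As a point of reference, below is a minimal, self-contained sketch of those two metrics applied to a ranked list of risk factors; the factor names and the graded relevance judgements are purely hypothetical and are not taken from the paper.

```python
import numpy as np

def average_precision(binary_relevance):
    """Average Precision over a ranked list of 0/1 relevance labels."""
    rel = np.asarray(binary_relevance, dtype=float)
    if rel.sum() == 0:
        return 0.0
    precision_at_k = np.cumsum(rel) / (np.arange(len(rel)) + 1)
    return float((precision_at_k * rel).sum() / rel.sum())

def ndcg(gains):
    """Normalized Discounted Cumulative Gain over a ranked list of graded gains."""
    g = np.asarray(gains, dtype=float)
    discounts = 1.0 / np.log2(np.arange(len(g)) + 2)
    dcg = float((g * discounts).sum())
    idcg = float((np.sort(g)[::-1] * discounts).sum())
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical ranking of risk factors as ordered by a rule-based model,
# scored against hypothetical graded relevance judgements (e.g. from clinicians).
ranked_factors = ["axillary_dissection", "bmi", "radiation_dose", "age", "smoking"]
graded_relevance = {"axillary_dissection": 3, "radiation_dose": 2, "bmi": 2, "age": 1, "smoking": 0}

gains = [graded_relevance[f] for f in ranked_factors]
print("AP:  ", round(average_precision([1 if x > 0 else 0 for x in gains]), 3))
print("NDCG:", round(ndcg(gains), 3))
```

In this generic setup, AP rewards rankings that place relevant factors early, while NDCG additionally accounts for graded rather than binary relevance.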

📝 Abstract
The lack of transparency of data-driven Artificial Intelligence techniques limits their interpretability and acceptance into healthcare decision-making processes. We propose an attribution-based approach to improve the interpretability of Explainable AI-based predictions in the specific context of arm lymphedema's risk assessment after lymph nodal radiotherapy in breast cancer. The proposed method performs a statistical analysis of the attributes in the rule-based prediction model using standard metrics from Information Retrieval techniques. This analysis computes the relevance of each attribute to the prediction and provides users with interpretable information about the impact of risk factors. The results of a user study that compared the output generated by the proposed approach with the raw output of the Explainable AI model suggested higher levels of interpretability and usefulness in the context of predicting lymphedema risk.
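
The abstract describes a statistical analysis that computes the relevance of each attribute in the rule-based model to the prediction, but does not spell out the scoring procedure. The sketch below shows one plausible way such a per-attribute relevance could be derived from a rule set, assuming access to each rule's antecedent attributes and its per-case outcomes; the rule set, field names, and the precision-style score are all assumptions for illustration, not the paper's actual method. The resulting ranking could then be evaluated with the AP/NDCG helpers shown earlier.

```python
from collections import defaultdict

# Hypothetical rule set: each rule lists the clinical attributes in its antecedent,
# the cases (by index) it fired on, and the cases on which its prediction was correct.
rules = [
    {"attributes": {"axillary_dissection", "radiation_dose"}, "fired_on": [0, 2, 3], "correct_on": [0, 3]},
    {"attributes": {"bmi"},                                   "fired_on": [1, 2],    "correct_on": [2]},
    {"attributes": {"axillary_dissection", "age"},            "fired_on": [0, 4],    "correct_on": [0, 4]},
]

def attribute_relevance(rules):
    """Precision-style relevance per attribute: among all rule firings whose
    antecedent mentions the attribute, the fraction that led to a correct prediction."""
    hits, totals = defaultdict(int), defaultdict(int)
    for rule in rules:
        for attr in rule["attributes"]:
            totals[attr] += len(rule["fired_on"])
            hits[attr] += len(rule["correct_on"])
    return {attr: hits[attr] / totals[attr] for attr in totals}

for attr, rel in sorted(attribute_relevance(rules).items(), key=lambda kv: -kv[1]):
    print(f"{attr:22s} relevance={rel:.2f}")
```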
Problem

Research questions and friction points this paper is trying to address.

Improving interpretability of AI predictions in healthcare
Assessing lymphedema risk after breast cancer radiotherapy
Using information retrieval to explain rule-based models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attribution-based approach enhances rule interpretability
Statistical analysis using Information Retrieval metrics
Computes attribute relevance for risk factor impact