IP-CRR: Information Pursuit for Interpretable Classification of Chest Radiology Reports

📅 2025-04-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited interpretability of, and low clinical trust in, AI-based chest X-ray report classification, this paper proposes an interpretable-by-design diagnostic model in which the explanation is the decision mechanism itself. Methodologically, it uses Information Pursuit to select highly discriminative clinical queries, combining a Flan-T5-based fact-checking module with a lightweight classifier to jointly produce disease classifications and query-answer explanations. Notably, this work is the first to apply Information Pursuit to structured explanation generation for radiology reports. Evaluated on the MIMIC-CXR dataset, the model achieves high classification accuracy while producing clinically intelligible, traceable explanation chains that help radiologists understand and trust AI decisions. The approach offers a practical paradigm for deploying interpretable AI in radiology.

📝 Abstract
The development of AI-based methods for analyzing radiology reports could lead to significant advances in medical diagnosis, from improving diagnostic accuracy to enhancing efficiency and reducing workload. However, the lack of interpretability in these methods has hindered their adoption in clinical settings. In this paper, we propose an interpretable-by-design framework for classifying radiology reports. The key idea is to extract a set of the most informative queries from a large set of reports and use these queries and their corresponding answers to predict a diagnosis. Thus, the explanation for a prediction is, by construction, the set of selected queries and answers. We use the Information Pursuit framework to select informative queries, the Flan-T5 model to determine whether facts are present in the report, and a classifier to predict the disease. Experiments on the MIMIC-CXR dataset demonstrate the effectiveness of the proposed method, highlighting its potential to enhance trust and usability in medical AI.
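The prediction step the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the query strings, weights, and 1/0 answer encoding are assumptions made here for the example.

```python
def classify(answers, weights, bias=0.0):
    """Lightweight linear classifier over binary query answers
    (1 = fact present in the report, 0 = absent); score > 0 -> positive."""
    score = bias + sum(w * a for w, a in zip(weights, answers))
    return 1 if score > 0 else 0

def explain(queries, answers):
    """By construction, the explanation is just the list of selected
    queries paired with their answers."""
    return [(q, "yes" if a else "no") for q, a in zip(queries, answers)]

# Hypothetical queries and answers for one report:
queries = ["Is cardiomegaly mentioned?", "Is there pleural effusion?"]
answers = [1, 0]
prediction = classify(answers, weights=[2.0, 1.5], bias=-1.0)
explanation = explain(queries, answers)
```

Because the classifier only sees the query answers, the explanation is faithful by design rather than generated post hoc.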
Problem

Research questions and friction points this paper is trying to address.

Develop interpretable AI for chest radiology report classification
Address lack of interpretability in medical AI diagnostics
Enhance trust in AI predictions via query-based explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interpretable-by-design framework for radiology reports
Information Pursuit selects most informative queries
Flan-T5 answers the queries by checking whether facts appear in the report
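The query-selection idea behind Information Pursuit can be sketched as a greedy mutual-information maximization. This is a simplified illustration under assumptions not taken from the paper: binary query answers, a small labeled training set, and a plug-in empirical estimator of mutual information; the paper's actual query space and estimator are not reproduced here.

```python
import math
from collections import Counter

def mutual_information(answers, labels):
    """Empirical mutual information I(A; Y) in nats between one
    query's binary answers and the class labels."""
    n = len(labels)
    pa, py = Counter(answers), Counter(labels)
    pay = Counter(zip(answers, labels))
    return sum((c / n) * math.log((c / n) / ((pa[a] / n) * (py[y] / n)))
               for (a, y), c in pay.items())

def information_pursuit(train_answers, train_labels, test_answers, k):
    """Greedily reveal k answers for one test report: at each step,
    pick the remaining query most informative about the label among
    training rows consistent with the answers revealed so far."""
    chosen, remaining = [], set(range(len(test_answers)))
    active = list(range(len(train_labels)))
    for _ in range(k):
        q = max(remaining, key=lambda j: mutual_information(
            [train_answers[i][j] for i in active],
            [train_labels[i] for i in active]))
        chosen.append((q, test_answers[q]))
        remaining.discard(q)
        # Condition on the revealed answer; keep all rows if none match.
        consistent = [i for i in active
                      if train_answers[i][q] == test_answers[q]]
        active = consistent or active
    return chosen
```

The returned list of (query, answer) pairs is exactly the explanation the framework exposes alongside the final classification.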