xMIL: Insightful Explanations for Multiple Instance Learning in Histopathology

📅 2024-06-06
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Multiple Instance Learning (MIL) models in digital pathology are hard to interpret, which keeps pathologists from understanding the rationale behind biomarker predictions. This paper proposes xMIL, a framework that systematically integrates Layer-wise Relevance Propagation (LRP) into MIL. Unlike conventional explanation approaches, which rely on small bag sizes or assume instance independence, xMIL accounts for inter-instance interactions, improving biological plausibility and clinical interpretability. Evaluated on three toy settings and four real-world histopathology datasets, xMIL achieves higher explanation faithfulness than prior methods, with the largest gains on challenging biomarker prediction tasks. An open-source implementation supports knowledge discovery and model debugging in practice. The core contribution is an MIL-specific LRP explanation framework that respects weak supervision while remaining faithful to histopathological semantics.

📝 Abstract
Multiple instance learning (MIL) is an effective and widely used approach for weakly supervised machine learning. In histopathology, MIL models have achieved remarkable success in tasks like tumor detection, biomarker prediction, and outcome prognostication. However, MIL explanation methods are still lagging behind, as they are limited to small bag sizes or disregard instance interactions. We revisit MIL through the lens of explainable AI (XAI) and introduce xMIL, a refined framework with more general assumptions. We demonstrate how to obtain improved MIL explanations using layer-wise relevance propagation (LRP) and conduct extensive evaluation experiments on three toy settings and four real-world histopathology datasets. Our approach consistently outperforms previous explanation attempts with particularly improved faithfulness scores on challenging biomarker prediction tasks. Finally, we showcase how xMIL explanations enable pathologists to extract insights from MIL models, representing a significant advance for knowledge discovery and model debugging in digital histopathology. Codes are available at: https://github.com/bifold-pathomics/xMIL.
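The abstract builds on layer-wise relevance propagation (LRP), which redistributes a model's output score backwards through the network so that each input receives a relevance share. As a rough illustration of that idea (not the paper's xMIL implementation; function name and toy numbers are made up here), a minimal epsilon-rule sketch for a single linear layer:

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Epsilon-rule LRP for one linear layer (illustrative sketch).

    a:     (d_in,)       input activations
    W:     (d_in, d_out) weight matrix
    b:     (d_out,)      bias
    R_out: (d_out,)      relevance arriving at the layer output
    Returns relevance of shape (d_in,), redistributed to the inputs.
    """
    z = a @ W + b                 # forward pre-activations
    z = z + eps * np.sign(z)      # stabilizer avoids division by zero
    s = R_out / z                 # relevance per unit of pre-activation
    return a * (W @ s)            # each input gets its contribution share

# Toy MIL-flavored usage: 3 instance features scored into one "bag" output,
# with all of the output relevance (1.0) propagated back to the instances.
a = np.array([1.0, -0.5, 2.0])
W = np.array([[0.3], [0.8], [-0.1]])
R = lrp_epsilon(a, W, np.zeros(1), np.array([1.0]))
print(R, R.sum())
```

The key property visible in the output is conservation: the instance relevances sum (up to the epsilon stabilizer) to the relevance placed on the bag score, so each instance's share can be read as its contribution to the prediction.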
Problem

Research questions and friction points this paper is trying to address.

Multi-Instance Learning
Pathology
Interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

xMIL
Interpretable AI
Multi-Instance Learning in Pathology
Julius Hense
PhD Student at BIFOLD, TU Berlin
Computational Pathology, Explainable AI, Multimodal Learning, Representation Learning
M. J. Idaji
BIFOLD – Berlin Institute for the Foundations of Learning and Data, Berlin, Germany; Machine Learning Group, Technische Universität Berlin, Berlin, Germany
Oliver Eberle
TU Berlin
Explainable AI, Interpretability, Deep Learning, Machine Learning, NLP
Thomas Schnake
Technical University of Berlin
Machine Learning
Jonas Dippel
TU Berlin
Laure Ciernik
PhD, TU Berlin
Oliver Buchstab
Institute of Pathology, Ludwig-Maximilians-Universität, Munich, Germany
Andreas Mock
Institute of Pathology, Ludwig-Maximilians-Universität, Munich, Germany; German Cancer Research Center (DKFZ) & German Cancer Consortium (DKTK), Munich Partner Site, Munich, Germany
Frederick Klauschen
Institute of Pathology, University of Munich (LMU)
Pathology, Digital Pathology/AI, Precision Medicine, Molecular Diagnostics, Bioinformatics
Klaus-Robert Müller
TU Berlin & Korea University & Google DeepMind & Max Planck Institute for Informatics, Germany
Machine Learning, Artificial Intelligence, Big Data, Computational Neuroscience