Insights into a radiology-specialised multimodal large language model with sparse autoencoders

📅 2025-07-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the limited interpretability of MAIRA-2, a radiology-specialised multimodal large language model. To tackle this, we propose a novel interpretability framework based on Matryoshka sparse autoencoders (SAEs), the first application of this technique to a medical multimodal foundation model. Methodologically, we perform large-scale SAE training to decompose the model's internal representations, automatically uncovering clinically relevant concepts, including imaging equipment types, lesion morphologies, and textual semantic units, and we integrate feature steering for targeted intervention in generative behaviour. Experiments identify dozens of clinically meaningful, interpretable SAE features, empirically validating concept-level controllability. Furthermore, we publicly release the trained SAE weights and an annotated interpretation dataset to advance transparency and reproducibility in medical AI research.
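
For intuition, here is a minimal PyTorch sketch of the Matryoshka SAE idea: reconstruction is scored at several nested prefixes of the latent dictionary, pressuring early latents to capture coarse, broadly useful concepts. The architecture (ReLU encoder with an L1 sparsity penalty) and all sizes are illustrative assumptions, not the configuration used for MAIRA-2.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MatryoshkaSAE(nn.Module):
    """Minimal Matryoshka sparse autoencoder sketch (illustrative only)."""

    def __init__(self, d_model: int = 4096, n_latents: int = 16384,
                 prefix_sizes=(1024, 4096, 16384), l1_coeff: float = 1e-3):
        super().__init__()
        assert max(prefix_sizes) <= n_latents
        self.encoder = nn.Linear(d_model, n_latents)
        self.decoder = nn.Linear(n_latents, d_model)
        self.prefix_sizes = prefix_sizes
        self.l1_coeff = l1_coeff

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sparse, non-negative feature activations.
        z = F.relu(self.encoder(x))
        # Reconstruct from each nested prefix of the dictionary, so the
        # smallest prefix must already explain the activations well.
        recon_loss = 0.0
        for k in self.prefix_sizes:
            z_k = torch.zeros_like(z)
            z_k[:, :k] = z[:, :k]
            recon_loss = recon_loss + F.mse_loss(self.decoder(z_k), x)
        recon_loss = recon_loss / len(self.prefix_sizes)
        # L1 penalty encourages sparse feature activations.
        return recon_loss + self.l1_coeff * z.abs().mean()
```

In training, `x` would be activations collected from the target model's hidden layers; each learned latent then serves as a candidate human-interpretable feature.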

📝 Abstract
Interpretability can improve the safety, transparency and trust of AI models, which is especially important in healthcare applications where decisions often carry significant consequences. Mechanistic interpretability, particularly through the use of sparse autoencoders (SAEs), offers a promising approach for uncovering human-interpretable features within large transformer-based models. In this study, we apply Matryoshka-SAE to the radiology-specialised multimodal large language model, MAIRA-2, to interpret its internal representations. Using large-scale automated interpretability of the SAE features, we identify a range of clinically relevant concepts - including medical devices (e.g., line and tube placements, pacemaker presence), pathologies such as pleural effusion and cardiomegaly, longitudinal changes and textual features. We further examine the influence of these features on model behaviour through steering, demonstrating directional control over generations with mixed success. Our results reveal practical and methodological challenges, yet they offer initial insights into the internal concepts learned by MAIRA-2 - marking a step toward deeper mechanistic understanding and interpretability of a radiology-adapted multimodal large language model, and paving the way for improved model transparency. We release the trained SAEs and interpretations: https://huggingface.co/microsoft/maira-2-sae.
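
As a rough sketch of the steering procedure mentioned above, one common approach is to add a scaled SAE decoder direction to a layer's hidden states via a forward hook during generation. The function below is a hypothetical illustration; the layer's output format (a tuple with hidden states first) and the strength `alpha` are assumptions, not the paper's exact intervention.

```python
import torch

def add_steering_hook(layer: torch.nn.Module,
                      decoder_weight: torch.Tensor,
                      feature_idx: int,
                      alpha: float = 8.0):
    """Add a scaled SAE feature direction to one layer's hidden states.

    decoder_weight: SAE decoder matrix of shape (d_model, n_latents),
    whose columns are feature directions. A negative `alpha` steers
    away from the concept. All values here are illustrative.
    """
    direction = decoder_weight[:, feature_idx]
    direction = direction / direction.norm()  # unit-norm feature direction

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + alpha * direction.to(device=hidden.device,
                                                dtype=hidden.dtype)
        return (steered, *output[1:]) if isinstance(output, tuple) else steered

    # The returned handle's .remove() disables steering after generation.
    return layer.register_forward_hook(hook)
```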
Problem

Research questions and friction points this paper is trying to address.

Interpret radiology-specialised multimodal large language model MAIRA-2
Identify clinically relevant concepts using sparse autoencoders
Improve model transparency and interpretability in healthcare
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse autoencoders interpret radiology-specialised multimodal model
Matryoshka-SAE reveals clinically relevant medical features
Steering demonstrates directional control over model generations
Kenza Bouzid
Microsoft Research
Machine Learning, Computer Vision
Shruthi Bannur
Microsoft Research
Machine Learning, Deep Learning, Computer Vision, Natural Language Processing
Daniel Coelho de Castro
Microsoft Research
Machine Learning, Medical Imaging, Computer Vision
Anton Schwaighofer
Microsoft Research, Health Futures, Cambridge, United Kingdom
Javier Alvarez-Valle
Microsoft Research, Health Futures, Cambridge, United Kingdom
Stephanie L. Hyland
Microsoft Research, Health Futures, Cambridge, United Kingdom