🤖 AI Summary
This study addresses the limited interpretability of MAIRA-2, a radiology-specialised multimodal large language model. To tackle this, the authors apply Matryoshka sparse autoencoders (SAEs), the first application of this technique to a medical multimodal foundation model. Methodologically, they perform large-scale SAE training to decompose the model's internal representations, automatically surfacing clinically relevant concepts, including imaging equipment types, lesion morphologies, and textual semantic units, and use feature steering to intervene on generation. Experiments identify dozens of clinically meaningful, interpretable SAE features and demonstrate directional, though not always reliable, concept-level control. The authors publicly release the trained SAE weights and an annotated interpretation dataset to advance transparency and reproducibility in medical AI research.
📝 Abstract
Interpretability can improve the safety, transparency and trust of AI models, which is especially important in healthcare applications where decisions often carry significant consequences. Mechanistic interpretability, particularly through the use of sparse autoencoders (SAEs), offers a promising approach for uncovering human-interpretable features within large transformer-based models. In this study, we apply Matryoshka SAEs to the radiology-specialised multimodal large language model MAIRA-2 to interpret its internal representations. Using large-scale automated interpretability of the SAE features, we identify a range of clinically relevant concepts, including medical devices (e.g., line and tube placements, pacemaker presence), pathologies such as pleural effusion and cardiomegaly, longitudinal changes, and textual features. We further examine the influence of these features on model behaviour through steering, demonstrating directional control over generations with mixed success. Our results reveal practical and methodological challenges, yet they offer initial insights into the internal concepts learned by MAIRA-2, marking a step toward deeper mechanistic understanding and interpretability of a radiology-adapted multimodal large language model and paving the way for improved model transparency. We release the trained SAEs and interpretations: https://huggingface.co/microsoft/maira-2-sae.
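The two mechanisms the abstract relies on, SAE decomposition of activations and feature steering, can be illustrated with a minimal sketch. This is a generic toy example with random weights and hypothetical dimensions, not the released MAIRA-2 SAE checkpoint or its actual architecture: a ReLU encoder maps a residual-stream activation to sparse feature activations, the decoder reconstructs it, and steering adds a scaled decoder direction for one feature back into the activation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 16, 64  # toy sizes; real SAEs are far wider than the model dimension

# Random toy weights; in practice these would be loaded from a trained SAE checkpoint.
W_enc = 0.1 * rng.standard_normal((d_model, d_sae))
b_enc = np.zeros(d_sae)
W_dec = 0.1 * rng.standard_normal((d_sae, d_model))
b_dec = np.zeros(d_model)

def sae_encode(x: np.ndarray) -> np.ndarray:
    # ReLU encoder yields sparse, non-negative feature activations.
    return np.maximum(0.0, (x - b_dec) @ W_enc + b_enc)

def sae_decode(z: np.ndarray) -> np.ndarray:
    # Reconstruction is a non-negative combination of decoder directions.
    return z @ W_dec + b_dec

def steer(x: np.ndarray, feature_idx: int, alpha: float) -> np.ndarray:
    # Feature steering: add a multiple of one feature's decoder direction
    # to the activation, nudging the model toward (or away from) that concept.
    return x + alpha * W_dec[feature_idx]

x = rng.standard_normal(d_model)      # stand-in for a residual-stream activation
z = sae_encode(x)                     # sparse feature activations
x_hat = sae_decode(z)                 # SAE reconstruction of the activation
x_steered = steer(x, feature_idx=3, alpha=5.0)
```

In a full pipeline, the steered activation would be written back into the model's forward pass at the hooked layer; the sign and magnitude of `alpha` control the direction and strength of the intervention.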