🤖 AI Summary
To address the limited interpretability of deep learning models in Raman spectroscopy–based medical diagnosis, this paper proposes SpecReX, a causal explainable AI method designed specifically for spectral data. SpecReX is the first to systematically integrate principled causal inference theory into Raman spectral interpretation: it iteratively perturbs spectral bands and tests whether the classification is preserved, quantifying the causal responsibility of each band for the model's decision and generating verifiable responsibility maps that localize disease-discriminative spectral regions. On synthetic spectra of increasing complexity, SpecReX recovers the known inter-class spectral differences and significantly outperforms state-of-the-art attribution methods. Its controllable spectral synthesis pipeline and benchmark evaluation framework provide a traceable, empirically verifiable tool for discovering disease-specific spectral biomarkers, strengthening the clinical credibility and translational potential of AI-assisted Raman diagnostics.
📝 Abstract
Raman spectroscopy is becoming more common in medical diagnostics, with deep learning models increasingly used to leverage its full potential. However, the opaque nature of such models, the sensitivity of medical diagnosis, and regulatory requirements together necessitate explainable AI tools. We introduce SpecReX, an explainability method specifically adapted to Raman spectra. SpecReX uses the theory of actual causality to rank causal responsibility within a spectrum, quantified by iteratively refining mutated versions of the spectrum and testing whether they retain the original classification. The explanations provided by SpecReX take the form of a responsibility map, highlighting the spectral regions most responsible for the model making a correct classification. To assess the validity of SpecReX, we create increasingly complex simulated spectra, in which a "ground truth" signal is seeded, to train a classifier. We then obtain SpecReX explanations and compare the results with another explainability tool. Using simulated spectra, we establish that SpecReX localizes to the known differences between classes under a number of conditions. This provides a foundation for finding the spectral features that differentiate disease classes, and is an important first step in proving the validity of SpecReX.
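The abstract does not give SpecReX's algorithm in detail, but the core idea it describes, mutating parts of a spectrum and testing whether the model's classification survives, can be illustrated with a minimal occlusion-style sketch. Everything below (the `responsibility_map` helper, the window size, the toy classifier) is an assumption for illustration, not the actual SpecReX implementation, which uses iterative refinement grounded in actual causality:

```python
import numpy as np

def responsibility_map(spectrum, classify, window=5, baseline=0.0):
    """Illustrative sketch only (not the SpecReX algorithm): occlude each
    spectral window in turn and mark the bands whose removal flips the
    model's original classification as causally responsible."""
    original = classify(spectrum)
    resp = np.zeros_like(spectrum, dtype=float)
    for start in range(0, len(spectrum), window):
        mutated = spectrum.copy()
        mutated[start:start + window] = baseline  # mutate one band
        if classify(mutated) != original:
            # occluding this band changed the prediction
            resp[start:start + window] = 1.0
    return resp

# Hypothetical toy classifier: class 1 iff a peak near index 50 is present
classify = lambda s: int(s[48:53].max() > 0.5)

spectrum = np.zeros(100)
spectrum[50] = 1.0  # seeded "ground truth" signal
rmap = responsibility_map(spectrum, classify)
# The map is nonzero only around the seeded peak.
```

A real responsibility map would be graded rather than binary, and SpecReX refines the mutants iteratively rather than scanning fixed windows; the sketch only conveys the perturb-and-test principle the abstract describes.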