SHAPCA: Consistent and Interpretable Explanations for Machine Learning Models on Spectroscopy Data

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the instability in model interpretability caused by high-dimensional and highly collinear spectral data, which often obscures the relationship between original signals and predictive rationale. To resolve this, the authors propose a novel approach that integrates principal component analysis (PCA) with SHAP (SHapley Additive exPlanations). The method first employs PCA for dimensionality reduction to enhance model stability, then maps SHAP-based explanations back to the original spectral space, thereby preserving both global and local interpretability. This work is the first to achieve consistent representation of feature importance—derived from reduced dimensions—in the original input space, effectively mitigating the interpretability disconnect commonly induced by conventional dimensionality reduction. Experimental results demonstrate that the proposed method significantly improves explanation stability across repeated runs and accurately links critical spectral bands to their corresponding biochemical components, thereby enhancing model trustworthiness and practical utility.
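The pipeline described above can be sketched in a few lines. The sketch below is an illustrative assumption, not the paper's implementation: it uses synthetic collinear "spectra", a linear surrogate model (for which SHAP values have an exact closed form, phi_j = beta_j * z_j on centred PC scores), and back-projects each PC's attribution onto the original wavelengths through the PCA loadings. All variable names and the back-projection rule are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Synthetic stand-in for spectra: 100 samples x 50 highly collinear wavelengths
X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 50)) \
    + 0.01 * rng.normal(size=(100, 50))
y = X[:, 10] - 0.5 * X[:, 30] + 0.1 * rng.normal(size=100)

# Step 1: PCA for dimensionality reduction
pca = PCA(n_components=5).fit(X)
Z = pca.transform(X)                       # PC scores, shape (100, 5)

# Step 2: fit a model in the reduced space (linear surrogate here)
model = LinearRegression().fit(Z, y)

# Step 3: SHAP values in PC space. For a linear model they are exact:
# phi_j = beta_j * (z_j - E[z_j]); PCA scores are centred, so E[z] = 0.
phi_pc = model.coef_ * Z                   # shape (100, 5)

# Step 4: map attributions back to the original spectral space.
# Since z_j = sum_i W[j, i] * (x_i - mu_i), wavelength i receives
# a_i = sum_j beta_j * W[j, i] * (x_i - mu_i)  (additivity is preserved).
a_wavelength = (X - pca.mean_) * (model.coef_ @ pca.components_)  # (100, 50)
```

By construction the per-wavelength attributions sum to the same total as the PC-space SHAP values, i.e. to the prediction minus the model intercept, so the explanation remains faithful after back-projection.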

📝 Abstract
In recent years, machine learning models have been increasingly applied to spectroscopic datasets for chemical and biomedical analysis. For their successful adoption, particularly in clinical and safety-critical settings, professionals and researchers must be able to understand and trust the reasoning behind model predictions. However, the inherently high dimensionality and strong collinearity of spectroscopy data pose a fundamental challenge to model explainability. These properties not only complicate model training but also undermine the stability and consistency of explanations, leading to fluctuations in feature importance across repeated training runs. Feature extraction techniques have been used to reduce the input dimensionality, but the derived features obscure the connection between the prediction and the original signal. This study proposes SHAPCA, an explainable machine learning pipeline that combines Principal Component Analysis (for dimensionality reduction) with SHapley Additive exPlanations (for post hoc explanation) to provide explanations in the original input space, which a practitioner can interpret and link back to the underlying biological components. The proposed framework enables analysis from both global and local perspectives, revealing the spectral bands that drive overall model behaviour as well as the instance-specific features that influence individual predictions. Numerical analysis demonstrated interpretable results and greater consistency across different runs.
Problem

Research questions and friction points this paper is trying to address.

spectroscopy data
model explainability
high dimensionality
feature importance consistency
interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

SHAPCA
Explainable AI
Spectroscopy
Principal Component Analysis
SHAP