Fast and Accurate Explanations of Distance-Based Classifiers by Uncovering Latent Explanatory Structures

📅 2025-08-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Weak interpretability of distance-based classifiers—such as k-nearest neighbors (k-NN) and support vector machines (SVM)—has long hindered their adoption in high-stakes domains. This paper first uncovers their implicit neural-network-like architecture: a composition of linear distance-detection units followed by nonlinear neighborhood-aggregation layers. Leveraging this insight, we propose a novel attribution framework specifically tailored for distance models, enabling seamless adaptation of explainable AI techniques—e.g., Layer-wise Relevance Propagation (LRP)—to classical non-neural classifiers. Our method decomposes decision-making by propagating relevance through the linear transformation inherent in distance metrics and the nonlinear pooling induced by neighborhood aggregation. Evaluated on multiple benchmark datasets, it consistently outperforms existing explanation baselines in fidelity and faithfulness. Furthermore, two real-world case studies—from scientific discovery and industrial deployment—demonstrate its practical interpretability, robustness to perturbations, and actionable insights.

📝 Abstract
Distance-based classifiers, such as k-nearest neighbors and support vector machines, continue to be a workhorse of machine learning, widely used in science and industry. In practice, to derive insights from these models, it is also important to ensure that their predictions are explainable. While the field of Explainable AI has supplied methods that are in principle applicable to any model, it has also emphasized the usefulness of latent structures (e.g. the sequence of layers in a neural network) to produce explanations. In this paper, we contribute by uncovering a hidden neural network structure in distance-based classifiers (consisting of linear detection units combined with nonlinear pooling layers) upon which Explainable AI techniques such as layer-wise relevance propagation (LRP) become applicable. Through quantitative evaluations, we demonstrate the advantage of our novel explanation approach over several baselines. We also show the overall usefulness of explaining distance-based models through two practical use cases.
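The core idea in the abstract (linear distance-detection units followed by nonlinear neighborhood pooling, through which relevance can be propagated) can be illustrated with a minimal sketch. This is an assumed toy setup for a nearest-prototype classifier, not the paper's exact LRP propagation rules: the squared Euclidean distance expands into a term linear in the input plus a class-independent term, and the `w * x` decomposition below is a simple stand-in for the paper's more refined relevance rules.

```python
import numpy as np

def nn_score(x, prototypes):
    """Score of one class: a bank of linear distance-detection units
    followed by max-pooling (the nonlinear neighborhood aggregation).

    Uses -||x - p||^2 = 2 p.x - ||p||^2 - ||x||^2: each unit is linear
    in x, and the ||x||^2 term is class-independent, so it cancels when
    class scores are compared.
    """
    linear = 2 * prototypes @ x - (prototypes ** 2).sum(axis=1)  # linear units
    return linear.max()  # max of -d^2 (up to ||x||^2) = nearest prototype

def explain_decision(x, protos_pos, protos_neg):
    """Attribute the pairwise decision score onto input features.

    Takes the nearest prototype of each class (a hard argmax stand-in
    for soft pooling) so the decision function becomes locally linear:
    f(x) = ||x - q||^2 - ||x - p||^2 = 2 (p - q).x + ||q||^2 - ||p||^2.
    """
    p = protos_pos[np.argmin(((protos_pos - x) ** 2).sum(axis=1))]
    q = protos_neg[np.argmin(((protos_neg - x) ** 2).sum(axis=1))]
    w = 2 * (p - q)                      # effective linear weights
    b = (q ** 2).sum() - (p ** 2).sum()  # input-independent bias
    relevance = w * x                    # simple w*x decomposition of f - b
    # conservation: relevances plus bias reproduce the decision score
    assert np.isclose(relevance.sum() + b,
                      ((x - q) ** 2).sum() - ((x - p) ** 2).sum())
    return relevance
```

Positive relevance then marks features pulling the input toward the predicted class's neighborhood, negative relevance features pulling it away.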
Problem

Research questions and friction points this paper is trying to address.

Uncover hidden neural network structures in distance-based classifiers
Apply Explainable AI techniques to distance-based models
Improve explanation accuracy for distance-based classifiers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uncover hidden neural network structure
Apply layer-wise relevance propagation
Enhance explainability of distance-based classifiers
Florian Bley
BIFOLD – Berlin Institute for the Foundations of Learning and Data, Berlin, Germany; Machine Learning Group, Technische Universität Berlin, Germany
Jacob Kauffmann
BIFOLD – Berlin Institute for the Foundations of Learning and Data, Berlin, Germany; Machine Learning Group, Technische Universität Berlin, Germany
Simon León Krug
Department of Chemistry and Applied Biosciences, ETH Zurich, Switzerland
Klaus-Robert Müller
TU Berlin & Korea University & Google DeepMind & Max Planck Institute for Informatics, Germany
Machine learning, artificial intelligence, big data, computational neuroscience
Grégoire Montavon
Professor, Charité / BIFOLD
Explainable AI, Machine Learning, Data Science