Enhancing LIME using Neural Decision Trees

📅 2026-03-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited fidelity of traditional LIME, which relies on surrogate models such as linear regression or decision trees that struggle to accurately capture the complex nonlinear local decision boundaries of black-box models. To overcome this limitation, the authors propose NDT-LIME, a novel approach that integrates neural decision trees (NDTs) into the LIME framework as the surrogate model. By leveraging the hierarchical structure and enhanced nonlinear representational capacity of NDTs, NDT-LIME achieves more accurate approximation of the black-box model’s local behavior while preserving interpretability. Experimental results across multiple standard tabular datasets demonstrate that NDT-LIME consistently and significantly outperforms conventional LIME in terms of explanation fidelity.

📝 Abstract
Interpreting complex machine learning models is a critical challenge, especially for tabular data where model transparency is paramount. Local Interpretable Model-Agnostic Explanations (LIME) has become a popular framework for interpretable machine learning, inspiring many extensions. While the traditional surrogate models used in LIME variants (e.g., linear regression and decision trees) offer a degree of stability, they can struggle to faithfully capture the complex non-linear decision boundaries inherent in many sophisticated black-box models. This work contributes toward bridging the gap between high predictive performance and interpretable decision-making. Specifically, we propose the NDT-LIME variant, which integrates Neural Decision Trees (NDTs) as surrogate models. By leveraging the structured, hierarchical nature of NDTs, our approach aims to provide more accurate and meaningful local explanations. We evaluate its effectiveness on several benchmark tabular datasets, showing consistent improvements in explanation fidelity over traditional LIME surrogates.
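The core idea can be sketched in a few lines: follow LIME's usual recipe (perturb the instance, query the black box, weight samples by proximity), but fit a small soft decision tree trained by gradient descent as the local surrogate instead of a linear model. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the XOR-style `black_box`, the depth-2 soft tree, and all hyperparameters are hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in nonlinear model to explain: an XOR-like rule that a
    # linear surrogate cannot capture, even locally near the origin.
    return ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(float)

def lime_kernel(Z, x, width=1.0):
    # LIME's exponential proximity kernel: nearby perturbations weigh more.
    return np.exp(-((Z - x) ** 2).sum(axis=1) / width ** 2)

# 1. Perturb the instance and query the black box (standard LIME steps).
x0 = np.array([0.1, 0.1])
Z = x0 + rng.normal(0.0, 1.0, (500, 2))
y = black_box(Z)
wn = lime_kernel(Z, x0)
wn /= wn.sum()  # normalised proximity weights

# 2. Depth-2 soft decision tree surrogate: each inner node routes a sample
#    right with probability sigmoid(z @ w + b); a prediction is the
#    leaf values averaged by the probability of reaching each leaf.
Wg = rng.normal(0.0, 0.5, (3, 2))   # gate hyperplanes: root, left, right
bg = np.zeros(3)
leaf = rng.normal(0.0, 0.1, 4)      # leaf values: LL, LR, RL, RR

def forward(Z, Wg, bg, leaf):
    s = 1.0 / (1.0 + np.exp(-(Z @ Wg.T + bg)))     # "go right" probabilities
    P = np.stack([(1 - s[:, 0]) * (1 - s[:, 1]),   # probability of each leaf
                  (1 - s[:, 0]) * s[:, 1],
                  s[:, 0] * (1 - s[:, 2]),
                  s[:, 0] * s[:, 2]], axis=1)
    return s, P, P @ leaf

_, _, pred0 = forward(Z, Wg, bg, leaf)
mse0 = float(wn @ (pred0 - y) ** 2)  # weighted error before training

# 3. Fit by gradient descent on the proximity-weighted squared error.
lr = 0.2
for _ in range(3000):
    s, P, pred = forward(Z, Wg, bg, leaf)
    g = 2.0 * wn * (pred - y)                          # d(loss)/d(pred)
    Lv = (1 - s[:, 1]) * leaf[0] + s[:, 1] * leaf[1]   # left-subtree value
    Rv = (1 - s[:, 2]) * leaf[2] + s[:, 2] * leaf[3]   # right-subtree value
    ds = np.stack([Rv - Lv,                            # d(pred)/d(gate prob)
                   (1 - s[:, 0]) * (leaf[1] - leaf[0]),
                   s[:, 0] * (leaf[3] - leaf[2])], axis=1)
    gg = g[:, None] * ds * s * (1 - s)                 # grads at pre-activations
    leaf -= lr * (P.T @ g)
    Wg -= lr * (gg.T @ Z)
    bg -= lr * gg.sum(axis=0)

_, _, pred = forward(Z, Wg, bg, leaf)
mse_tree = float(wn @ (pred - y) ** 2)

# 4. Classic LIME baseline for comparison: weighted linear least squares.
A = np.c_[Z, np.ones(len(Z))]
coef = np.linalg.lstsq(A * np.sqrt(wn)[:, None], y * np.sqrt(wn), rcond=None)[0]
mse_lin = float(wn @ (A @ coef - y) ** 2)

print(f"weighted MSE  linear: {mse_lin:.4f}  soft tree: {mse_tree:.4f}")
```

The explanation then comes from reading the fitted tree: each gate hyperplane is a soft local split, and the leaf values summarise the black box's behaviour in each region, which is the interpretable structure the paper credits NDTs with preserving.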
Problem

Research questions and friction points this paper is trying to address.

interpretability
LIME
non-linear decision boundaries
surrogate models
tabular data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural Decision Trees
LIME
Interpretable Machine Learning
Surrogate Models
Explanation Fidelity
Mohamed Aymen Bouyahia
ENS Paris-Saclay, Centre Borelli, CNRS, Gif-sur-Yvette, France; Le Crédit Lyonnais, Paris, France
Argyris Kalogeratos
Senior Research Scientist, Centre Borelli, ENS Paris-Saclay, Université Paris-Saclay
Machine Learning · Artificial Intelligence · Complex Networks