🤖 AI Summary
This work addresses the limitations of traditional local explanation methods in NLP, such as LIME, which generate semantically invalid or out-of-distribution perturbations through random token masking, resulting in low-fidelity surrogate models. Existing generative approaches often introduce confounding variables via unconstrained rewrites, making it difficult to isolate individual feature contributions. To overcome these issues, the authors propose LIME-LLM, a novel framework that integrates large language models with a controlled perturbation mechanism. By employing a "Single Mask-Single Sample" protocol together with neutral-infill and boundary-infill strategies, LIME-LLM generates fluent, manifold-preserving counterfactuals that strictly isolate feature effects. Evaluated on the CoLA, SST-2, and HateXplain benchmarks, with human-annotated rationales as ground truth, the method achieves state-of-the-art local explanation fidelity, significantly outperforming LIME, SHAP, Integrated Gradients, and LLiMe.
📝 Abstract
Local explanation methods such as LIME (Ribeiro et al., 2016) remain fundamental to trustworthy AI, yet their application to NLP is limited by a reliance on random token masking. These heuristic perturbations frequently generate semantically invalid, out-of-distribution inputs that weaken the fidelity of local surrogate models. While recent generative approaches such as LLiMe (Angiulli et al., 2025b) attempt to mitigate this by employing Large Language Models for neighborhood generation, they rely on unconstrained paraphrasing that introduces confounding variables, making it difficult to isolate specific feature contributions. We introduce LIME-LLM, a framework that replaces random noise with hypothesis-driven, controlled perturbations. By enforcing a strict "Single Mask-Single Sample" protocol and employing distinct neutral-infill and boundary-infill strategies, LIME-LLM constructs fluent, on-manifold neighborhoods that rigorously isolate feature effects. We evaluate our method against established baselines (LIME, SHAP, Integrated Gradients) and the generative LLiMe baseline across three diverse benchmarks (CoLA, SST-2, and HateXplain), using human-annotated rationales as ground truth. Empirical results demonstrate that LIME-LLM establishes a new benchmark for black-box NLP explainability, achieving significant improvements in local explanation fidelity compared to both traditional perturbation-based methods and recent generative alternatives.
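To make the "Single Mask-Single Sample" idea concrete, here is a minimal sketch of how such a neighborhood might be built. This is an illustration only, not the authors' implementation: the real method uses an LLM to produce fluent neutral or boundary infills, whereas the `neutral_infill` helper below is a hypothetical stand-in that substitutes a fixed neutral token. The key property shown is that each perturbed sample differs from the original in exactly one feature, so the interpretable representation (a LIME-style binary vector) isolates one feature's contribution per sample.

```python
# Hedged sketch of a "Single Mask-Single Sample" neighborhood builder.
# `neutral_infill` is a hypothetical placeholder for the paper's
# LLM-based infill step, which generates fluent, on-manifold text.

def neutral_infill(tokens, i):
    # Hypothetical stand-in: replace exactly one token with a neutral
    # filler word. The actual method would query an LLM here.
    out = list(tokens)
    out[i] = "thing"
    return " ".join(out)

def single_mask_single_sample(sentence):
    """Generate one perturbed sample per token, masking exactly one
    feature at a time so each sample isolates that feature's effect."""
    tokens = sentence.split()
    neighborhood = []
    for i in range(len(tokens)):
        z = [1] * len(tokens)  # LIME-style interpretable representation
        z[i] = 0               # feature i is the only one removed
        neighborhood.append((z, neutral_infill(tokens, i)))
    return neighborhood

samples = single_mask_single_sample("the movie was wonderful")
for z, text in samples:
    print(z, "->", text)
```

Each `(z, text)` pair would then be scored by the black-box model and used to fit the local surrogate, exactly as in standard LIME, but over a neighborhood whose texts remain fluent and whose samples each toggle a single feature.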