🤖 AI Summary
Anchors, a local, model-agnostic explanation method, suffers from high computational overhead that limits its use in real-time scenarios. To address this, we propose the first pre-training-based acceleration framework for Anchors. Starting from general-purpose pre-trained explanations, our method applies a two-stage rule transformation: a horizontal transformation (feature substitution) and a vertical transformation (iterative refinement), which together preserve input specificity while retaining the generality of the pre-trained rules. The framework supports multimodal data, including tabular, textual, and image inputs, and pioneers the application of the pre-training paradigm to rule-set explanation generation. Evaluated across diverse benchmark datasets, our approach achieves an average 3.2× speedup over standard Anchors while maintaining comparable explanation fidelity and interpretability, substantially improving the practicality of Anchors in latency-critical applications.
📝 Abstract
Anchors is a popular local, model-agnostic explanation technique whose applicability is limited by its computational inefficiency. To address this limitation, we propose a pre-training-based approach that accelerates Anchors without compromising explanation quality. Our approach exploits the iterative nature of Anchors' algorithm, which gradually refines an explanation until it is sufficiently precise for a given input: instead of starting from scratch, we initialize this refinement with a general explanation obtained through pre-training. Specifically, we develop a two-step rule transformation process: a horizontal transformation adapts a pre-trained explanation to the current input by replacing features, and a vertical transformation refines the resulting general explanation until it is precise enough for that input. We evaluate our method on tabular, text, and image datasets, demonstrating that it significantly reduces explanation generation time while maintaining fidelity and interpretability, thereby enabling the practical adoption of Anchors in time-sensitive applications.
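The two-step rule transformation can be illustrated with a small sketch for a discretized tabular setting. The rule encoding (feature-to-value pairs), the toy model, and the sampling-based precision estimator below are illustrative assumptions, not the paper's implementation:

```python
import random

def horizontal_transform(pretrained_rule, instance):
    """Horizontal step: keep the pre-trained rule's feature set but
    substitute each predicate's value with the value the current
    instance actually takes (feature replacement)."""
    return {f: instance[f] for f in pretrained_rule}

def vertical_transform(rule, instance, precision_fn, tau=0.95):
    """Vertical step: iteratively add predicates from the instance
    until the rule's estimated precision reaches the target tau."""
    rule = dict(rule)
    remaining = [f for f in instance if f not in rule]
    while precision_fn(rule) < tau and remaining:
        f = remaining.pop(0)
        rule[f] = instance[f]
    return rule

def make_precision_fn(model, instance, features, n_samples=200, seed=0):
    """Toy precision estimate: fraction of random perturbations that
    satisfy the rule and get the same prediction as the instance."""
    rng = random.Random(seed)
    target = model(instance)
    def precision(rule):
        hits = 0
        for _ in range(n_samples):
            sample = {f: rng.choice([0, 1]) for f in features}
            sample.update(rule)  # fix the rule's features to their values
            hits += model(sample) == target
        return hits / n_samples
    return precision

# Toy model and input: prediction is 1 iff both a and b are 1.
model = lambda x: int(x["a"] == 1 and x["b"] == 1)
instance = {"a": 1, "b": 1, "c": 0}

pretrained = {"a": 0}  # a rule pre-trained on some other input
adapted = horizontal_transform(pretrained, instance)  # {"a": 1}
precision_fn = make_precision_fn(model, instance, list(instance))
final = vertical_transform(adapted, instance, precision_fn)
print(final)  # {"a": 1} alone is only ~50% precise, so "b" is added
```

The point of the sketch is the division of labor: the horizontal step makes a pre-trained rule applicable to the new input at negligible cost, and the vertical step reuses Anchors' usual precision-driven refinement, but starting from that adapted rule rather than from an empty one.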