Accelerating Anchors via Specialization and Feature Transformation

📅 2025-02-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Anchors, a local model-agnostic explanation method, suffers from high computational overhead, limiting its applicability in real-time scenarios. To address this, we propose the first pretraining-based acceleration framework for Anchors. Starting from general-purpose pretrained explanations, the method applies a two-stage rule transformation: a horizontal transformation (feature substitution) and a vertical transformation (iterative refinement), which together preserve input specificity while retaining generalization. The framework supports multimodal data, including tabular, textual, and image inputs, and pioneers the application of the pretraining paradigm to rule-based explanation generation. Across diverse benchmark datasets, the approach achieves an average 3.2× speedup over standard Anchors while maintaining comparable explanation fidelity and interpretability, significantly improving the practicality of Anchors in latency-critical applications.

📝 Abstract
Anchors is a popular local model-agnostic explanation technique whose applicability is limited by its computational inefficiency. To address this limitation, we propose a pre-training-based approach that accelerates Anchors without compromising explanation quality. Our approach leverages the iterative nature of Anchors' algorithm, which gradually refines an explanation until it is precise enough for a given input, by supplying a general explanation obtained through pre-training as Anchors' initial explanation. Specifically, we develop a two-step rule transformation process: the horizontal transformation adapts a pre-trained explanation to the current input by replacing features, and the vertical transformation refines the resulting general explanation until it is precise enough for the input. We evaluate our method across tabular, text, and image datasets, demonstrating that it significantly reduces explanation generation time while maintaining fidelity and interpretability, thereby enabling the practical adoption of Anchors in time-sensitive applications.
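The two-step transformation described in the abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the rule representation (a dict of feature predicates), the function names, and the toy precision estimate are all assumptions introduced here for clarity. In the actual Anchors algorithm, precision would be estimated by sampling perturbations and querying the black-box model.

```python
# Hypothetical sketch of the horizontal/vertical rule transformations.
# A "rule" is modeled as {feature: value} predicates; this representation
# and all names below are assumptions, not the paper's code.

def horizontal_transform(pretrained_rule, instance):
    """Adapt a pre-trained rule to the current input by substituting each
    predicate's value with the corresponding feature value of the instance."""
    return {feature: instance[feature] for feature in pretrained_rule}

def vertical_transform(rule, instance, precision_fn, threshold=0.95):
    """Iteratively refine a general rule, adding predicates drawn from the
    instance until the rule's estimated precision meets the threshold."""
    rule = dict(rule)
    for feature, value in instance.items():
        if precision_fn(rule) >= threshold:
            break  # already precise enough for this input
        if feature not in rule:
            rule[feature] = value  # specialize: pin one more feature
    return rule

# Toy usage with a stand-in precision estimate that grows with rule length
# (real Anchors estimates precision via perturbation sampling).
instance = {"age": 31, "income": "high", "owns_home": True}
pretrained = {"income": "medium"}  # general rule obtained from pre-training

adapted = horizontal_transform(pretrained, instance)
precision = lambda r: 0.5 + 0.2 * len(r)
final = vertical_transform(adapted, instance, precision, threshold=0.9)
print(final)
```

The sketch mirrors the abstract's intuition: the horizontal step makes the cached rule consistent with the current input cheaply, and the vertical step reuses Anchors' own refinement loop, which now starts from a warm rule rather than from scratch.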
Problem

Research questions and friction points this paper is trying to address.

Accelerate Anchors explanation technique
Maintain explanation quality during acceleration
Enable practical use in time-sensitive applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pre-training-based explanation acceleration
Horizontal and vertical rule transformation
Maintains fidelity and interpretability
Haonan Yu
Research Scientist, Skild AI
Robotics · Deep Reinforcement Learning · Multimodal Learning
Junhao Liu
School of Computer Science, Peking University, Beijing, China; Key Lab of High Confidence Software Technologies (Peking University), Ministry of Education, Beijing, China
Xin Zhang
School of Computer Science, Peking University, Beijing, China; Key Lab of High Confidence Software Technologies (Peking University), Ministry of Education, Beijing, China