EnsembleSHAP: Faithful and Certifiably Robust Attribution for Random Subspace Method

📅 2026-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of efficient and provably secure feature attribution for the random subspace method. The authors propose EnsembleSHAP, a feature attribution approach integrated into the random subspace framework that reuses its computational byproducts to produce efficient, faithful, and provably robust explanations. EnsembleSHAP is the first method to offer provable robustness against explanation-preserving attacks while maintaining local accuracy and computational efficiency. Grounded in Shapley values and exploiting the intrinsic structure of random subspaces, it demonstrates strong explanatory effectiveness and security across diverse threat models, including backdoor, adversarial, and jailbreak attacks.
📝 Abstract
The random subspace method has wide-ranging security applications, such as providing certified defenses against adversarial and backdoor attacks and building robustly aligned LLMs that resist jailbreak attacks. However, explaining the random subspace method remains underexplored. Existing state-of-the-art feature attribution methods, such as Shapley values and LIME, are computationally impractical and lack security guarantees when applied to the random subspace method. In this work, we propose EnsembleSHAP, an intrinsically faithful and secure feature attribution method for the random subspace method that reuses its computational byproducts. Specifically, our feature attribution method is 1) computationally efficient, 2) maintains essential properties of effective feature attribution (such as local accuracy), and 3) offers guaranteed protection against explanation-preserving attacks on feature attribution methods. To the best of our knowledge, this is the first work to establish provable robustness against explanation-preserving attacks. We also comprehensively evaluate our explanation's effectiveness under different empirical attacks, including backdoor attacks, adversarial attacks, and jailbreak attacks. The code is at https://github.com/Wang-Yanting/EnsembleSHAP. WARNING: This document may include content that could be considered harmful.
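The core idea the abstract describes — an ensemble of base models, each trained on a random subset of features, whose per-model votes are byproducts that can be reused for attribution — can be illustrated with a minimal sketch. This is not the authors' implementation: the nearest-centroid base learner, the toy data, and the difference-of-means attribution rule below are illustrative stand-ins for EnsembleSHAP's actual Shapley-value construction; see the linked repository for the real method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the label depends only on features 0 and 1; features 2-4 are noise.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

n_models, k = 50, 3  # ensemble size, subspace dimension
subspaces = [rng.choice(5, size=k, replace=False) for _ in range(n_models)]

# Base learner: nearest class centroid, fit only on the sampled feature subset.
centroids = []
for S in subspaces:
    c0 = X[y == 0][:, S].mean(axis=0)
    c1 = X[y == 1][:, S].mean(axis=0)
    centroids.append((c0, c1))

def base_predict(x, S, c0, c1):
    # Vote 1 if x (restricted to subspace S) is closer to the class-1 centroid.
    return int(np.linalg.norm(x[S] - c1) < np.linalg.norm(x[S] - c0))

x = np.array([1.5, 1.0, 0.0, 0.0, 0.0])  # query point
votes = np.array([base_predict(x, S, c0, c1)
                  for S, (c0, c1) in zip(subspaces, centroids)])
ensemble_pred = int(votes.mean() > 0.5)  # majority vote of the ensemble

# Byproduct-based attribution: for each feature j, compare the mean vote of
# base models whose subspace contains j against those whose subspace does not.
# The votes were already computed for prediction, so this step is nearly free.
scores = np.zeros(5)
for j in range(5):
    has_j = np.array([j in S for S in subspaces])
    scores[j] = votes[has_j].mean() - votes[~has_j].mean()
```

Because every base model's vote is a byproduct of making the ensemble prediction, the attribution pass adds essentially no extra model evaluations — the efficiency argument the abstract makes against rerunning Shapley-value or LIME estimation on top of the ensemble.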
Problem

Research questions and friction points this paper is trying to address.

feature attribution
random subspace method
security guarantee
explanation-preserving attacks
computational efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

EnsembleSHAP
random subspace method
feature attribution
certifiable robustness
explanation-preserving attacks