Backdoor-based Explainable AI Benchmark for High Fidelity Evaluation of Attribution Methods

📅 2024-05-02
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Existing attribution methods lack ground-truth causal explanations, rendering faithfulness evaluation unreliable. Method: This paper introduces BackX, a high-fidelity explainable AI benchmark that injects controllable causal attribution signals via backdoor triggers, satisfying faithfulness criteria including completeness, causality, and controllability. Contribution/Results: The authors provide theoretical arguments that BackX surpasses both synthetic and real-world benchmarks for faithfulness assessment, and establish a standardized evaluation protocol incorporating attribution post-processing and cross-model consistency analysis. Empirically, BackX enables reproducible, highly discriminative evaluation across 12 state-of-the-art attribution methods, systematically exposing their causal faithfulness deficiencies. Furthermore, it inspires a novel attribution-based backdoor detection paradigm.
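The core idea of the benchmark is that a backdoor trigger gives a known causal region: a faithful attribution method should concentrate its importance scores on the trigger for poisoned inputs. A minimal sketch of such a ground-truth-based score is below; `trigger_localization_score` is a hypothetical helper for illustration, not the paper's exact metric.

```python
import numpy as np

def trigger_localization_score(attribution, trigger_mask):
    """Fraction of total absolute attribution mass that falls inside the
    known backdoor-trigger region. Higher is better: with the trigger as
    ground-truth causal evidence, a faithful method should place most of
    its attribution there for poisoned inputs."""
    attribution = np.abs(np.asarray(attribution, dtype=float))
    total = attribution.sum()
    if total == 0:
        return 0.0
    return float(attribution[np.asarray(trigger_mask, dtype=bool)].sum() / total)

# Toy example: a 4x4 attribution map with a 2x2 trigger patch in the corner.
attr = np.zeros((4, 4))
attr[:2, :2] = 1.0                      # all attribution lands on the trigger
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
print(trigger_localization_score(attr, mask))  # → 1.0
```

A uniform attribution map over the same input would score only 4/16 = 0.25, so the metric separates methods that localize the causal region from those that do not.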

📝 Abstract
Attribution methods compute importance scores for input features to explain the output predictions of deep models. However, accurate assessment of attribution methods is challenged by the lack of benchmark fidelity for attributing model predictions. Moreover, other confounding factors in attribution estimation, including the setup choices of post-processing techniques and explained model predictions, further compromise the reliability of the evaluation. In this work, we first identify a set of fidelity criteria that reliable benchmarks for attribution methods are expected to fulfill, thereby facilitating a systematic assessment of attribution benchmarks. Next, we introduce a Backdoor-based eXplainable AI benchmark (BackX) that adheres to the desired fidelity criteria. We theoretically establish the superiority of our approach over the existing benchmarks for well-founded attribution evaluation. With extensive analysis, we also identify a setup for a consistent and fair benchmarking of attribution methods across different underlying methodologies. This setup is ultimately employed for a comprehensive comparison of existing methods using our BackX benchmark. Finally, our analysis also provides guidance for defending against backdoor attacks with the help of attribution methods.
Problem

Research questions and friction points this paper is trying to address.

Evaluating faithfulness of attribution methods without ground truth
Developing benchmark criteria for systematic attribution assessment
Establishing standardized evaluation setup to mitigate confounding factors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Backdoor-based benchmark for explainable AI evaluation
Theoretical superiority over existing attribution benchmarks
Standardized setup mitigating post-processing confounding factors
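The controllable causal signal behind the benchmark comes from standard backdoor poisoning: a trigger patch is stamped onto training images and their labels are flipped to an attacker-chosen target class, so the trigger region becomes a known causal feature for that class. A minimal BadNets-style sketch is below; `poison_sample` is an illustrative helper, not the paper's implementation.

```python
import numpy as np

def poison_sample(image, label, trigger, target_label, top=0, left=0):
    """Stamp a trigger patch onto a copy of the image and relabel it to the
    target class. After training on a mix of clean and poisoned samples,
    the trigger region serves as ground truth for attribution evaluation."""
    poisoned = image.copy()
    h, w = trigger.shape[:2]
    poisoned[top:top + h, left:left + w] = trigger
    return poisoned, target_label

# Toy usage: a blank 8x8 grayscale image poisoned with a 2x2 white patch.
img = np.zeros((8, 8), dtype=np.uint8)
trig = np.full((2, 2), 255, dtype=np.uint8)
poisoned_img, poisoned_label = poison_sample(img, label=3, trigger=trig,
                                             target_label=7)
```

Because the trigger location and target label are chosen by the benchmark designer, the causal explanation of a poisoned prediction is fully controlled, which is what the fidelity criteria above require.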
Peiyu Yang
Master in Robotics, TUD
Naveed Akhtar
The University of Melbourne, Grattan Street, Parkville Victoria, 3010, Australia
Jiantong Jiang
The University of Western Australia, Crawley, WA, 6009, Australia
Ajmal Mian
The University of Western Australia, Crawley, WA, 6009, Australia