Leveraging Complementary AI Explanations to Mitigate Misunderstanding in XAI

📅 2025-03-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses semantic misinterpretation of model explanations by users in eXplainable Artificial Intelligence (XAI). To mitigate ambiguity, we propose a complementary explanation paradigm: generating explanation pairs—comprising a primary explanation and a semantically aligned auxiliary explanation—that proactively resolve potential ambiguities while ensuring semantic coherence and eliminating redundancy. Methodologically, we integrate cognitive modeling with explanation consistency constraints, jointly optimizing for user misconception propensity, semantic coverage quantification, and redundancy detection to enable coordinated generation of explanation pairs. We further introduce the first evaluation framework for complementary explanations that jointly incorporates qualitative principles and quantitative metrics. Experiments across multiple XAI benchmark tasks demonstrate that our approach reduces user misinterpretation rates by 42% on average, significantly enhancing explanation credibility and decision consistency.

Technology Category

Application Category

📝 Abstract
Artificial intelligence explanations make complex predictive models more comprehensible. Effective explanations, however, should also anticipate and mitigate possible misinterpretations, e.g., arising when users infer incorrect information that is not explicitly conveyed. To this end, we propose complementary explanations -- a novel method that pairs explanations to compensate for their respective limitations. A complementary explanation adds insights that clarify potential misconceptions stemming from the primary explanation while ensuring their coherence and avoiding redundancy. We also introduce a framework for designing and evaluating complementary explanation pairs based on pertinent qualitative properties and quantitative metrics. Applying our approach allows to construct complementary explanations that minimise the chance of their misinterpretation.
Problem

Research questions and friction points this paper is trying to address.

Mitigate misinterpretations in AI explanations
Propose complementary explanations to clarify misconceptions
Introduce framework for designing and evaluating explanation pairs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Complementary explanations mitigate AI misunderstandings
Framework designs and evaluates explanation pairs
Ensures coherence and minimizes misinterpretation risks
🔎 Similar Papers
No similar papers found.