🤖 AI Summary
In high-stakes scenarios, users often struggle to reason with and validate AI explanations. To address this, we propose a bidirectional explainable AI (XAI) framework that establishes a reflective "explanation–feedback–reconstruction" loop: user-generated free-text insights, elicited during interaction, are semantically parsed by a large language model (LLM), structured into interpretable representations, and dynamically fed back into a multi-view visualization system. The method integrates LLM-driven semantic parsing, interactive annotation, and coordinated visualization to enable real-time coupling between user cognition and visual explanations. In two real-world use cases, the framework improved explanation usability and cognitive depth, and qualitative user feedback suggests it supports deeper, more deliberate interactive understanding. Our core contribution is the first systematic integration of user-generated high-level insights back into the visualization pipeline, extending the paradigm of human-AI collaborative explanation in XAI.
📝 Abstract
As AI systems become increasingly integrated into high-stakes domains, enabling users to accurately interpret model behavior is critical. While AI explanations can be provided, users often struggle to reason effectively with these explanations, limiting their ability to validate or learn from AI decisions. To address this gap, we introduce Reverse Mapping, a novel approach that enhances visual explanations by incorporating user-derived insights back into the explanation workflow. Our system extracts structured insights from free-form user interpretations using a large language model and maps them back onto visual explanations through interactive annotations and coordinated multi-view visualizations. Inspired by the verification loop in the visualization knowledge generation model, this design aims to foster more deliberate, reflective interaction with AI explanations. We demonstrate our approach in a prototype system with two use cases and qualitative user feedback.
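The extract-and-map-back loop described above can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: a stand-in rule-based parser replaces the LLM semantic-parsing step, and the `Insight` schema, `parse_insight`, and `map_back` names are assumptions made for this sketch.

```python
import re
from dataclasses import dataclass, field

@dataclass
class Insight:
    """Structured representation of one user-derived insight (illustrative schema)."""
    text: str                                     # original free-form interpretation
    features: list = field(default_factory=list)  # model features the user refers to
    polarity: str = "neutral"                     # direction of the claimed effect

def parse_insight(text, known_features):
    """Stand-in for the LLM semantic-parsing step: pull out referenced
    features and a coarse effect polarity from free-form user text."""
    lowered = text.lower()
    feats = [f for f in known_features if f.lower() in lowered]
    if re.search(r"\b(increase|higher|raises|positive)\b", lowered):
        polarity = "positive"
    elif re.search(r"\b(decrease|lower|reduces|negative)\b", lowered):
        polarity = "negative"
    else:
        polarity = "neutral"
    return Insight(text=text, features=feats, polarity=polarity)

def map_back(insight):
    """Map a structured insight onto the visual explanation: emit one
    annotation per referenced feature for a coordinated view to render."""
    return [{"view": "feature_attribution", "feature": f, "tag": insight.polarity}
            for f in insight.features]

insight = parse_insight("Higher income seems to increase the approval score",
                        known_features=["income", "age", "credit_history"])
annotations = map_back(insight)
```

In the actual system the parsing step would be an LLM call returning a richer structured representation, and the annotations would drive interactive highlights across the coordinated multi-view visualization rather than a plain list of dicts.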