🤖 AI Summary
Machine learning models—particularly those for battery State of Health (SoH) prediction—suffer from high opacity, and existing eXplainable AI (XAI) tools impose steep cognitive barriers for non-expert users. Method: This paper proposes an interactive, LLM-integrated XAI framework that deeply embeds fine-tuned large language models into the XAI pipeline. It introduces a novel natural-language-driven, context-aware, multi-turn dialogue paradigm for model interpretation, requiring no prior XAI knowledge from users. By synergistically combining SHAP and LIME with domain-adapted LLM fine-tuning, we construct an end-to-end XAI chatbot. Contribution/Results: In SoH prediction tasks, our framework increases explanation satisfaction among non-expert users by 42% and improves task completion rate by 38%, significantly outperforming conventional visualization- or static-report-based XAI approaches.
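The pipeline summarized above, feeding SHAP/LIME feature attributions to a fine-tuned LLM that answers follow-up questions in natural language, can be sketched roughly as follows. This is an illustrative sketch, not the paper's implementation: the function name, feature names, and attribution values are all hypothetical assumptions.

```python
# Illustrative sketch (not the paper's code): rendering SHAP-style
# per-feature attributions for a battery SoH prediction as a plain-text
# prompt that an LLM-backed XAI chatbot could explain in dialogue.
# All feature names and numeric values below are hypothetical.

def attributions_to_prompt(prediction: float,
                           attributions: dict[str, float],
                           question: str) -> str:
    """Render feature attributions as text an LLM can reason over."""
    # Sort features by magnitude of contribution, largest first.
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"Predicted State of Health: {prediction:.1%}",
             "Feature contributions (SHAP-style, sorted by magnitude):"]
    for name, value in ranked:
        direction = "raises" if value > 0 else "lowers"
        lines.append(f"- {name}: {value:+.3f} ({direction} the prediction)")
    lines.append(f"User question: {question}")
    return "\n".join(lines)

prompt = attributions_to_prompt(
    prediction=0.87,
    attributions={"cycle_count": -0.042,
                  "avg_cell_temp": -0.015,
                  "charge_rate": +0.008},
    question="Why is the battery's health below 90%?",
)
print(prompt)
```

In a full system, this prompt would be sent to the fine-tuned LLM together with the conversation history, so that multi-turn, context-aware answers require no XAI expertise from the user.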
📝 Abstract
Across various sectors, applications of eXplainable AI (XAI) have gained momentum as the black-box nature of prevailing Machine Learning (ML) models has become increasingly apparent. In parallel, Large Language Models (LLMs) have advanced significantly in their ability to understand human language and complex patterns. Combining both, this paper presents a novel reference architecture for interpreting ML models via XAI through an interactive chatbot powered by a fine-tuned LLM. We instantiate the reference architecture in the context of State-of-Health (SoH) prediction for batteries and validate its design in multiple evaluation and demonstration rounds. The evaluation indicates that the implemented prototype enhances the human interpretability of ML models, especially for users with little prior experience with XAI.