🤖 AI Summary
To address the limited interpretability, high false-positive rates, and low analyst trust in LLM-based anomaly detection systems, this paper proposes a dual-visual explanation framework integrating BERTViz (for visualizing attention flows) and Captum (for feature attribution), coupled with automated natural language explanation generation. The method employs a RoBERTa-based detection model, achieving 99.6% accuracy on the HDFS log dataset—substantially outperforming Falcon-7B, DeBERTa, and Mistral-7B. Experimental results demonstrate significant improvements in explanation readability and credibility; user feedback confirms accelerated anomaly triage and reduced false-positive interference. The core contribution is the first integration of dual-visual attribution with natural language generation, delivering an end-to-end interpretable solution for log-based anomaly detection.
📝 Abstract
Conversational AI and Large Language Models (LLMs) have become powerful tools across domains, including cybersecurity, where they help detect threats early and improve response times. However, challenges such as false positives and complex model management still limit trust. Although Explainable AI (XAI) aims to make AI decisions more transparent, many security analysts remain uncertain about its usefulness. This study presents a framework that detects anomalies and provides high-quality explanations through the visual tools BERTViz and Captum, combined with natural language reports generated from attention outputs. This reduces manual effort and speeds up remediation. Our comparative analysis on the HDFS dataset from LogHub showed that RoBERTa offers high accuracy (99.6%) and strong anomaly detection, outperforming Falcon-7B and DeBERTa and exhibiting greater flexibility than the larger Mistral-7B. User feedback confirms the chatbot's ease of use and improved understanding of anomalies, demonstrating the ability of the developed framework to strengthen cybersecurity workflows.
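The abstract describes turning attention outputs into natural language reports. As a minimal, hypothetical sketch of that idea (not the paper's actual pipeline), the snippet below takes a single layer of multi-head attention weights, averages the `[CLS]` row across heads as a crude token-importance score, and fills a report template with the top tokens. The attention tensor here is synthetic; in the real framework these weights would come from the RoBERTa detector, and Captum/BERTViz would provide the attribution and visualization.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def top_attention_tokens(attn, tokens, k=3):
    """Rank tokens by the attention the [CLS] position pays them.

    attn: (heads, seq, seq) attention weights for one layer.
    Averaging the CLS row over heads is a deliberately simple
    importance proxy, not Captum's attribution method.
    """
    cls_row = attn[:, 0, :].mean(axis=0)          # mean over heads
    order = np.argsort(cls_row)[::-1]
    return [(tokens[i], float(cls_row[i])) for i in order[:k]]

def explain(tokens, attn, label):
    """Fill a templated natural-language report from attention scores."""
    top = top_attention_tokens(attn, tokens)
    names = ", ".join(f"'{t}'" for t, _ in top)
    return f"Log line flagged as {label}: highest attention on {names}."

# Toy example: 4 tokens, 2 heads, synthetic logits turned into attention.
tokens = ["[CLS]", "block", "error", "timeout"]
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 4, 4))
logits[:, 0, 2] += 3.0        # force CLS to attend strongly to "error"
attn = softmax(logits, axis=-1)
print(explain(tokens, attn, "anomaly"))
```

A production version would pull `attn` from a Hugging Face model called with `output_attentions=True` and would likely aggregate over layers (e.g. attention rollout) rather than reading a single CLS row.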