🤖 AI Summary
The “black-box” nature of AI systems undermines expert trust, particularly in high-stakes domains such as healthcare. To address this, we propose a visual analytics framework that spans the entire AI lifecycle (data preprocessing, feature engineering, model training, hyperparameter tuning, and model comparison) within a unified design space. By integrating interactive visualization with machine learning workflows, the framework yields explainable AI (XAI) dashboards that support multi-stage model understanding, diagnostic analysis, and collaborative refinement, enabling transparent end-to-end exploration from input to output. Evaluation indicates that this approach deepens domain experts’ understanding of model decision logic and strengthens their trust in AI recommendations, and the framework has been applied in real-world clinical decision-support scenarios.
📝 Abstract
Our society increasingly depends on intelligent systems to solve complex problems, ranging from recommender systems suggesting the next movie to watch to AI models assisting in medical diagnoses for hospitalized patients. By iteratively improving diagnostic accuracy and efficiency, AI holds significant potential to mitigate medical misdiagnosis, preventing numerous deaths and reducing an economic burden of approximately EUR 450 billion annually. However, a key obstacle to AI adoption lies in the lack of transparency: many automated systems function as "black boxes," providing predictions without revealing the underlying processes. This opacity can hinder experts' ability to trust and rely on AI systems. Visual analytics (VA) offers a compelling solution by combining AI models with interactive visualizations. These specialized charts and graphs empower users to bring their domain expertise to bear on refining and improving the models, bridging the gap between AI and human understanding. In this work, we define, categorize, and explore how VA solutions can foster trust across the stages of a typical AI pipeline. We propose a design space for innovative visualizations and present an overview of our previously developed VA dashboards, which support critical tasks within the various pipeline stages, including data processing, feature engineering, hyperparameter tuning, and understanding, debugging, refining, and comparing models.