🤖 AI Summary
This work addresses the problem that non-experts often produce low-quality visualizations in specialized domains because they lack domain knowledge, consuming substantial expert time and creating organizational bottlenecks. To overcome this, the paper introduces a systematic framework that structures experts' tacit knowledge into explicit rules and design principles, combining a request classifier, retrieval-augmented generation (RAG), and an agent architecture to give large language models (LLMs) autonomous, reactive, proactive, and social capabilities. Evaluated across five engineering scenarios, the approach achieves a 206% improvement in output quality, with all generated visualizations rated at expert level; the synthesized code also shows higher quality and lower variance than baseline methods.
📝 Abstract
Critical domain knowledge typically resides with a few experts, creating organizational bottlenecks in scalability and decision-making. Non-experts struggle to create effective visualizations, leading to suboptimal insights and diverting expert time. This paper investigates how to capture and embed human domain knowledge into AI agent systems through an industrial case study. We propose a software engineering framework for capturing human domain knowledge when engineering AI agents for simulation data visualization: a Large Language Model (LLM) is augmented with a request classifier, a Retrieval-Augmented Generation (RAG) system for code generation, codified expert rules, and visualization design principles, unified in an agent that demonstrates autonomous, reactive, proactive, and social behavior. Evaluation across five scenarios spanning multiple engineering domains with 12 evaluators shows a 206% improvement in output quality: our agent achieves expert-level ratings in all cases versus the baseline's poor performance, while maintaining superior code quality with lower variance. Our contributions are an automated agent-based system for visualization generation and a validated framework for systematically capturing and codifying tacit expert knowledge into AI agents, demonstrating that non-experts can achieve expert-level outcomes in specialized domains.
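The pipeline the abstract describes (classify the request, retrieve codified expert rules and code via RAG, then generate visualization code) can be sketched roughly as below. This is a minimal illustrative mock, not the paper's implementation: the knowledge-base entries, categories, and function names (`classify_request`, `retrieve`, `generate_visualization_code`) are all hypothetical, and the retrieval and generation steps are toy stand-ins for an embedding-based retriever and an LLM call.

```python
# Hypothetical knowledge base: codified expert rules paired with code
# snippets that a RAG step would retrieve. All entries are placeholders.
KNOWLEDGE_BASE = [
    {"tags": {"stress", "contour"},
     "rule": "Use a perceptually uniform colormap for stress fields.",
     "snippet": "plt.contourf(x, y, stress, cmap='viridis')"},
    {"tags": {"time", "history"},
     "rule": "Label axes with physical units on time-history plots.",
     "snippet": "plt.plot(t, signal); plt.xlabel('time [s]')"},
]

def classify_request(request: str) -> str:
    """Toy request classifier: route to a visualization category by keyword.
    A real system might use an LLM or a trained classifier here."""
    text = request.lower()
    if "stress" in text or "contour" in text:
        return "field_plot"
    if "time" in text or "history" in text:
        return "time_series"
    return "generic"

def retrieve(request: str) -> dict:
    """Toy retrieval step: pick the entry with the most keyword overlap.
    A real RAG system would use vector similarity over embedded documents."""
    words = set(request.lower().split())
    return max(KNOWLEDGE_BASE, key=lambda e: len(e["tags"] & words))

def generate_visualization_code(request: str) -> str:
    """Assemble the generation context: category, expert rule, retrieved code.
    A real agent would pass this context to an LLM; here we just template it."""
    category = classify_request(request)
    entry = retrieve(request)
    return (f"# category: {category}\n"
            f"# expert rule: {entry['rule']}\n"
            f"{entry['snippet']}")

print(generate_visualization_code("plot the stress contour of the beam"))
```

The point of the sketch is the division of labor: the classifier narrows the task, retrieval injects the codified tacit knowledge, and generation is constrained by both, which is what lets a non-expert's vague request yield expert-conformant output.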