🤖 AI Summary
Scientific workflows increasingly span the edge–cloud–HPC continuum and generate large-scale, structurally complex provenance data; existing analysis approaches based on scripts, SQL, or static dashboards suffer from poor interactivity and weak semantic understanding. To address this, we propose the first LLM-based agent system specifically designed for workflow provenance analysis. Our method introduces a modular reference architecture and a dedicated evaluation framework, integrating prompt tuning, retrieval-augmented generation (RAG), and natural language-to-structured query translation to enable deep semantic parsing of provenance metadata and to generate insights beyond raw log analysis. The system adopts a lightweight, metadata-driven design and supports multiple foundation models, including LLaMA, GPT, Gemini, and Claude. Evaluated on a real-world chemistry workflow, it achieves markedly higher query accuracy and analytical depth, enabling dynamic, interactive provenance exploration driven by natural language.
📝 Abstract
Modern scientific discovery increasingly relies on workflows that process data across the Edge, Cloud, and High Performance Computing (HPC) continuum. Comprehensive, in-depth analyses of these data are critical for hypothesis validation, anomaly detection, reproducibility, and impactful findings. Although workflow provenance techniques support such analyses, provenance data at large scale become complex and difficult to analyze. Existing systems depend on custom scripts, structured queries, or static dashboards, which limit how users can interact with the data. In this work, we introduce an evaluation methodology, a reference architecture, and an open-source implementation that leverage interactive Large Language Model (LLM) agents for runtime data analysis. Our approach uses a lightweight, metadata-driven design that translates natural language into structured provenance queries. Evaluations across LLaMA, GPT, Gemini, and Claude, covering diverse query classes and a real-world chemistry workflow, show that modular design, prompt tuning, and Retrieval-Augmented Generation (RAG) enable accurate and insightful LLM agent responses beyond the recorded provenance.
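The core mechanism described above (grounding an LLM with provenance metadata, then translating a natural-language question into a structured query) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the schema, the `translate` helper, and the stub LLM are all hypothetical, and a real deployment would retrieve schema metadata from the provenance store and call an actual model backend (LLaMA, GPT, Gemini, or Claude).

```python
import json

# Hypothetical provenance schema metadata. In a RAG setup, this context
# would be retrieved from the workflow's provenance database catalog
# rather than hard-coded.
SCHEMA = {
    "tasks": ["task_id", "activity", "status", "start_time", "end_time"],
    "data_objects": ["object_id", "produced_by", "consumed_by", "size_bytes"],
}

def build_prompt(question: str) -> str:
    """Ground the model with retrieved schema metadata, then ask it to
    emit a structured query as JSON instead of free-form text."""
    schema_text = "\n".join(
        f"- {table}({', '.join(cols)})" for table, cols in SCHEMA.items()
    )
    return (
        "You translate questions about workflow provenance into JSON queries.\n"
        f"Available tables:\n{schema_text}\n"
        'Respond only with JSON: {"table": ..., "filter": ..., "select": [...]}.\n'
        f"Question: {question}"
    )

def translate(question: str, llm) -> dict:
    """Send the grounded prompt to any LLM backend and parse the JSON
    reply into a query plan that a provenance store could execute."""
    reply = llm(build_prompt(question))
    return json.loads(reply)

# Stub standing in for a real model backend, so the flow is runnable here.
def fake_llm(prompt: str) -> str:
    return (
        '{"table": "tasks", "filter": {"status": "FAILED"},'
        ' "select": ["task_id", "activity"]}'
    )

plan = translate("Which tasks failed?", fake_llm)
print(plan)
```

Because the agent emits a structured plan rather than raw SQL, the same translation layer can target different provenance backends, and malformed model output surfaces as a parse error instead of an arbitrary query.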