Toward Automated and Trustworthy Scientific Analysis and Visualization with LLM-Generated Code

📅 2025-11-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Domain scientists often lack the programming expertise to conduct data analysis efficiently. This paper addresses the limited reliability and trustworthiness of large language models (LLMs) in scientific code generation by introducing the first benchmark suite for Python-based data analysis and visualization grounded in real-world research tasks. It proposes three synergistic strategies: data-aware prompt disambiguation, retrieval-augmented prompt enhancement, and iterative error repair, integrated with retrieval-augmented generation (RAG) and automated execution validation. Experiments demonstrate substantial improvements in code executability and functional correctness, although domain-context understanding remains a critical bottleneck. This work contributes both a reusable, realistic evaluation benchmark and a systematic technical framework for developing trustworthy AI-powered scientific tools.
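The first strategy, data-aware prompt disambiguation, can be illustrated with a minimal sketch: before calling the model, inspect the dataset and attach its column names and a few sample rows to the user's request, so the model does not have to guess the schema. The function name `disambiguate_prompt` and the exact prompt layout are illustrative assumptions, not the paper's implementation.

```python
import csv
import io

def disambiguate_prompt(user_request, csv_text, n_preview=3):
    """Data-aware prompt disambiguation (sketch): prepend the dataset's
    column names and a small row preview to the user's request so the
    generated code matches the actual schema. The prompt layout here is
    an illustrative assumption."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)                                  # column names
    preview = [row for _, row in zip(range(n_preview), reader)]
    context = "Columns: " + ", ".join(header)
    if preview:
        context += "\nSample rows:\n" + "\n".join(",".join(r) for r in preview)
    return f"{user_request}\n\nDataset context:\n{context}"

# Example: an ambiguous request becomes schema-grounded.
data = "time,temperature\n0,21.5\n1,22.1\n2,22.8\n"
prompt = disambiguate_prompt("Plot the temperature over time.", data)
```

A real pipeline would extract this context directly from the file the user supplies (e.g. column dtypes and shapes as well), but the idea is the same: the dataset itself resolves ambiguity in the natural-language request.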

📝 Abstract
As modern science becomes increasingly data-intensive, the ability to analyze and visualize large-scale, complex datasets is critical to accelerating discovery. However, many domain scientists lack the programming expertise required to develop custom data analysis workflows, creating barriers to timely and effective insight. Large language models (LLMs) offer a promising solution by generating executable code from natural language descriptions. In this paper, we investigate the trustworthiness of open-source LLMs in autonomously producing Python scripts for scientific data analysis and visualization. We construct a benchmark suite of domain-inspired prompts that reflect real-world research tasks and systematically evaluate the executability and correctness of the generated code. Our findings show that, without human intervention, the reliability of LLM-generated code is limited, with frequent failures caused by ambiguous prompts and the models' insufficient understanding of domain-specific contexts. To address these challenges, we design and assess three complementary strategies: data-aware prompt disambiguation, retrieval-augmented prompt enhancement, and iterative error repair. While these methods significantly improve execution success rates and output quality, further refinement is needed. This work highlights both the promise and current limitations of LLM-driven automation in scientific workflows and introduces actionable techniques and a reusable benchmark for building more inclusive, accessible, and trustworthy AI-assisted research tools.
Problem

Research questions and friction points this paper is trying to address.

Reliability of LLM-generated code for scientific data analysis and visualization is limited without human intervention
Failures stem from ambiguous prompts and insufficient understanding of domain-specific contexts
Mitigation strategies improve reliability but require further refinement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data-aware prompt disambiguation for code generation
Retrieval-augmented prompt enhancement to improve context
Iterative error repair for reliable scientific workflows
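The third strategy, iterative error repair, pairs naturally with the automated execution validation mentioned in the summary: run the generated script, capture any traceback, and feed it back to the model for a corrected attempt. The sketch below is an assumption about how such a loop could look; `generate` stands in for any LLM call (prompt in, code string out), and the demo "model" is a hypothetical stub.

```python
import traceback

def repair_loop(task_prompt, generate, max_attempts=3):
    """Iterative error repair (sketch): execute the generated script,
    capture any traceback, and re-prompt the model with the error until
    the script runs cleanly or the attempt budget is exhausted."""
    code = generate(task_prompt)
    for attempt in range(1, max_attempts + 1):
        try:
            # No sandboxing here for brevity; a real system would
            # isolate execution of untrusted generated code.
            exec(code, {"__name__": "__repair__"})
            return code, attempt                  # script ran cleanly
        except Exception:
            err = traceback.format_exc()
            code = generate(
                f"{task_prompt}\n\nThe previous script failed with:\n{err}\n"
                "Return a corrected script."
            )
    return None, max_attempts                     # budget exhausted

# Demo with a stand-in "model" that fails once, then succeeds.
calls = {"n": 0}
def fake_llm(prompt):
    calls["n"] += 1
    return "1/0" if calls["n"] == 1 else "result = 2 + 2"

fixed_code, attempts = repair_loop("Compute 2 + 2.", fake_llm)
```

Execution success here only certifies that the script runs, not that its output is scientifically correct, which is consistent with the paper's observation that domain-context understanding remains the bottleneck.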
Apu Kumar Chakroborti
Georgia State University
Yi Ding
Georgia State University
Lipeng Wan
Georgia State University
Scientific Data Management · HPC · Data-Intensive Computing · Storage and I/O · System Resilience