🤖 AI Summary
Domain scientists often lack the programming expertise to conduct data analysis efficiently. This paper addresses the limited reliability and trustworthiness of large language models (LLMs) in scientific code generation by introducing the first benchmark suite for Python-based data analysis and visualization grounded in real-world research tasks. The authors propose three complementary strategies (data-aware prompt disambiguation, retrieval-augmented prompt enhancement, and iterative error repair), integrated with retrieval-augmented generation (RAG) and automated execution validation. Experiments demonstrate substantial improvements in code executability and functional correctness, though domain-context understanding remains a critical bottleneck. The work contributes both a reusable, realistic evaluation benchmark and a systematic technical framework for building trustworthy AI-powered scientific tools.
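To make the first strategy concrete: data-aware prompt disambiguation presumably means enriching a vague user request with the dataset's actual schema before it reaches the model. The sketch below is illustrative only; the function name and prompt template are assumptions, not taken from the paper:

```python
import pandas as pd

def disambiguate_prompt(user_prompt: str, df: pd.DataFrame) -> str:
    """Augment a vague analysis request with the dataset's concrete schema,
    so the LLM does not have to guess column names or dtypes.
    (Hypothetical helper; the paper's exact prompt format is not specified.)"""
    schema = "\n".join(f"- {col}: {dtype}" for col, dtype in df.dtypes.items())
    preview = df.head(3).to_csv(index=False)
    return (
        f"{user_prompt}\n\n"
        f"The data is a pandas DataFrame with these columns:\n{schema}\n\n"
        f"First rows (CSV):\n{preview}"
    )
```

Grounding the prompt in real column names and types removes one common failure mode mentioned above: ambiguous prompts that force the model to invent identifiers.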
📝 Abstract
As modern science becomes increasingly data-intensive, the ability to analyze and visualize large-scale, complex datasets is critical to accelerating discovery. However, many domain scientists lack the programming expertise required to develop custom data analysis workflows, creating barriers to timely and effective insight. Large language models (LLMs) offer a promising solution by generating executable code from natural language descriptions. In this paper, we investigate the trustworthiness of open-source LLMs in autonomously producing Python scripts for scientific data analysis and visualization. We construct a benchmark suite of domain-inspired prompts that reflect real-world research tasks and systematically evaluate the executability and correctness of the generated code. Our findings show that, without human intervention, the reliability of LLM-generated code is limited, with frequent failures caused by ambiguous prompts and the models' insufficient understanding of domain-specific contexts. To address these challenges, we design and assess three complementary strategies: data-aware prompt disambiguation, retrieval-augmented prompt enhancement, and iterative error repair. While these methods significantly improve execution success rates and output quality, further refinement is needed. This work highlights both the promise and current limitations of LLM-driven automation in scientific workflows and introduces actionable techniques and a reusable benchmark for building more inclusive, accessible, and trustworthy AI-assisted research tools.