🤖 AI Summary
This study addresses the challenge of generating executable scientific code for novel algorithms using large language models (LLMs) in zero-shot, training-free settings. To overcome the limitations of existing tools like Code-Scribe—which lack support for zero-shot algorithm implementation—we propose an LLM-assisted progressive code synthesis framework. It integrates program semantic understanding, algorithmic structure parsing, and iterative code verification to enable end-to-end generation of high-fidelity scientific computing code from natural language specifications. Unlike conventional data-driven approaches, our method eliminates reliance on historical code examples and successfully automates the implementation of original numerical algorithms—including custom integrators and optimizers—without any task-specific training data. Experiments demonstrate 89.3% functional correctness and a 72% reduction in average debugging time, significantly accelerating scientific software extensibility. The results validate the feasibility and engineering utility of LLMs in creative, specification-driven programming tasks.
📝 Abstract
With the emergence and rapid evolution of large language models (LLM), automating coding tasks has become an im- portant research topic. Many efforts are underway and liter- ature abounds about the efficacy of models and their ability to generate code. A less explored aspect of code generation is for new algorithms, where the training data-set would not have included any previous example of similar code. In this paper we propose a new methodology for writing code from scratch for a new algorithm using LLM assistance, and describe enhancement of a previously developed code- translation tool, Code-Scribe, for new code generation.