LASSI: An LLM-Based Automated Self-Correcting Pipeline for Translating Parallel Scientific Codes

📅 2024-06-30
🏛️ 2024 IEEE International Conference on Cluster Computing Workshops (CLUSTER Workshops)
📈 Citations: 3
Influential: 0
🤖 AI Summary
Large language models (LLMs) for scientific computing are hampered by a scarcity of large-scale, high-quality parallel-code training data. Method: This paper proposes LASSI, an end-to-end automated framework for bidirectional translation between parallel programming languages, demonstrated on OpenMP–CUDA interconversion. Its core innovation is a self-correcting closed loop driven by compiler and execution feedback: structured prompt engineering guides an LLM (e.g., CodeLlama) to generate code, and errors from Clang/NVCC compilation and runtime evaluation are fed back for iterative debugging and refactoring, with no human annotation required. Results: For OpenMP→CUDA translation, 80% of outputs produce the expected result and 78% run within 10% of (or faster than) the original benchmark's runtime; for CUDA→OpenMP, the corresponding rates are 85% and 62%. The framework substantially improves the executability, functional fidelity, and runtime efficiency of generated parallel code.
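The feedback loop described above can be sketched as a small generate–validate–repair cycle. This is a minimal illustration, not LASSI's actual implementation: `translate` and `validate` are hypothetical stand-ins for the guided LLM prompt and the Clang/NVCC compile-and-run check, and the toy example below simulates one repair round.

```python
def self_correct(source, translate, validate, max_attempts=3):
    """Generate a translation, then iteratively repair it using
    validator feedback (e.g., compiler errors or runtime mismatches)."""
    feedback = None
    for attempt in range(1, max_attempts + 1):
        candidate = translate(source, feedback)   # LLM call via guided prompt
        ok, feedback = validate(candidate)        # compile + execute the candidate
        if ok:
            return candidate, attempt
    return None, max_attempts

# Toy stand-ins: the "LLM" omits a kernel qualifier on its first try
# and repairs the code once the "compiler" reports the error.
def toy_translate(source, feedback):
    code = "void add(float* a) { /* ... */ }"
    if feedback and "__global__" in feedback:
        code = "__global__ " + code               # apply the reported fix
    return code

def toy_validate(code):
    if not code.startswith("__global__"):
        return False, "error: kernel must be declared __global__"
    return True, None

result, attempts = self_correct("omp source", toy_translate, toy_validate)
# result is the repaired code; attempts == 2 (one repair round)
```

In LASSI itself, the validator role is played by real compilation (Clang for OpenMP target offload, NVCC for CUDA) plus execution, with the error text fed back into the next prompt.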

📝 Abstract
This paper addresses the problem of sourcing significant training data for LLMs focused on science and engineering. In particular, a crucial challenge is sourcing parallel scientific codes at the scale of millions to billions of codes. To tackle this problem, we propose an automated pipeline framework called LASSI, designed to translate between parallel programming languages by bootstrapping existing closed- or open-source LLMs. LASSI incorporates autonomous enhancement through self-correcting loops, in which errors encountered during the compilation and execution of generated code are fed back to the LLM through guided prompting for debugging and refactoring. We validate LASSI on the bidirectional translation of existing GPU benchmarks between OpenMP target offload and CUDA. Evaluating LASSI with different application codes across four LLMs demonstrates its effectiveness at generating executable parallel codes: 80% of OpenMP to CUDA translations and 85% of CUDA to OpenMP translations produce the expected output. We also observe that approximately 78% of OpenMP to CUDA translations and 62% of CUDA to OpenMP translations execute within 10% of, or faster than, the runtime of the original benchmark code in the same language.
Problem

Research questions and friction points this paper is trying to address.

Automating translation between parallel programming languages for scientific codes
Generating executable parallel codes using LLM-based self-correcting pipelines
Enhancing translation accuracy and runtime performance for GPU benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based automated pipeline for code translation
Self-correcting loops enhance translation accuracy
Bidirectional translation between OpenMP target offload and CUDA