AutoParLLM: GNN-guided Context Generation for Zero-Shot Code Parallelization using LLMs

📅 2023-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
For zero-shot code parallelization, this paper proposes a Graph Neural Network (GNN)-guided context generation method that models program dependency structures to synthesize high-quality In-Context Learning (ICL) examples for large language models (LLMs), thereby enhancing their ability to generate efficient parallel code. Key contributions include: (1) the first GNN-driven ICL context generation paradigm; (2) a novel parallelism-aware evaluation metric, *ourscore*, tailored to parallel code quality; and (3) a comprehensive evaluation framework integrating CodeBERTScore with parallelism-specific metrics. Experiments on the NAS and Rodinia benchmarks demonstrate significant improvements over the GPT-4 baseline: CodeBERTScore increases by 19.9% and 6.48%, respectively, while measured speedup improves by approximately 17% and 16%.
📝 Abstract
In-Context Learning (ICL) has been shown to be a powerful technique for augmenting the capabilities of LLMs across a diverse range of tasks. This work proposes AutoParLLM, a novel way to generate context using guidance from graph neural networks (GNNs) in order to produce efficient parallel code. We evaluate AutoParLLM on 12 applications from two well-known benchmark suites of parallel codes: the NAS Parallel Benchmarks and the Rodinia Benchmark. Our results show that AutoParLLM improves state-of-the-art LLMs (e.g., GPT-4) by 19.9% on the NAS benchmarks and 6.48% on the Rodinia benchmarks in terms of CodeBERTScore for the task of parallel code generation. Moreover, AutoParLLM improves the most powerful LLM to date, GPT-4, achieving approximately 17% (NAS) and 16% (Rodinia) better speedup. In addition, we propose *ourscore* for evaluating the quality of parallel code and show its effectiveness. AutoParLLM is available at https://github.com/quazirafi/AutoParLLM.git.
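The core idea above can be illustrated with a minimal sketch: a GNN embeds each program's dependency graph, and the labeled (serial, parallel) pairs whose embeddings lie closest to the query program's embedding are assembled into the ICL prompt. All names, the hand-written toy embeddings, and the prompt layout below are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of GNN-guided ICL context generation (assumed structure):
# embeddings stand in for GNN outputs over program dependency graphs.
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Pool of labeled examples: (serial code, parallel code, toy GNN embedding).
EXAMPLE_POOL = [
    ("for (i = 0; i < n; i++) a[i] = b[i] + c[i];",
     "#pragma omp parallel for\nfor (i = 0; i < n; i++) a[i] = b[i] + c[i];",
     [0.9, 0.1, 0.0]),
    ("for (i = 1; i < n; i++) a[i] = a[i-1] + b[i];",
     "/* loop-carried dependence: left serial */",
     [0.1, 0.9, 0.0]),
]

def build_context(query_code, query_embedding, k=1):
    """Select the k nearest pool entries in embedding space and format
    them as few-shot ICL examples, followed by the query program."""
    ranked = sorted(EXAMPLE_POOL,
                    key=lambda ex: cosine(query_embedding, ex[2]),
                    reverse=True)
    shots = "\n\n".join(f"Serial:\n{s}\nParallel:\n{p}"
                        for s, p, _ in ranked[:k])
    return f"{shots}\n\nSerial:\n{query_code}\nParallel:"

# A query loop whose (assumed) embedding resembles the parallelizable example.
prompt = build_context("for (i = 0; i < n; i++) s[i] = t[i] * 2;",
                       [0.8, 0.2, 0.0])
```

The prompt string would then be sent to the LLM (e.g., GPT-4), which completes the final `Parallel:` slot; the GNN's role is purely to pick structurally similar demonstrations rather than to generate code itself.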
Problem

Research questions and friction points this paper is trying to address.

Enhance LLMs for zero-shot code parallelization.
Generate efficient parallel codes using GNN guidance.
Improve GPT-4 performance in code generation benchmarks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

GNN-guided context generation
Zero-shot code parallelization
Improves GPT-4 performance significantly