Improving Parallel Program Performance with LLM Optimizers via Agent-System Interface

📅 2024-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high barrier and time cost of performance tuning for parallel programs, which stems from domain scientists' limited systems-programming expertise, this paper proposes an automated mapper-generation framework for high-performance computing. The method introduces three key innovations: (1) an Agent-System Interface that decouples domain logic from system-level optimization; (2) a hybrid optimization strategy that combines structured search-space modeling via a domain-specific language (DSL) with generative optimization driven by a large language model (LLM); and (3) AutoGuide, a feedback mechanism that semantically parses raw execution logs into actionable guidance, enabling convergence in only 10 iterations. Evaluated on nine scientific-computing benchmarks, the generated mappers achieve up to 1.34× speedup over expert-written implementations and 3.8× faster performance than OpenTuner after 1,000 iterations, while reducing tuning time from days to minutes.

📝 Abstract
Modern scientific discovery increasingly relies on high-performance computing for complex modeling and simulation. A key challenge in improving parallel program performance is efficiently mapping tasks to processors and data to memory, a process dictated by intricate, low-level system code known as mappers. Developing high-performance mappers demands days of manual tuning, posing a significant barrier for domain scientists without systems expertise. We introduce a framework that automates mapper development with generative optimization, leveraging richer feedback beyond scalar performance metrics. Our approach features the Agent-System Interface, which includes a Domain-Specific Language (DSL) to abstract away low-level complexity of system code and define a structured search space, as well as AutoGuide, a mechanism that interprets raw execution output into actionable feedback. Unlike traditional reinforcement learning methods such as OpenTuner, which rely solely on scalar feedback, our method finds superior mappers in far fewer iterations. With just 10 iterations, it outperforms OpenTuner even after 1000 iterations, achieving 3.8X faster performance. Our approach finds mappers that surpass expert-written mappers by up to 1.34X speedup across nine benchmarks while reducing tuning time from days to minutes.
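The optimization loop the abstract describes — an LLM proposing mapper candidates within a DSL-defined search space, with execution logs distilled into rich feedback instead of a bare scalar reward — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the names `optimize_mapper`, `summarize_logs`, and the toy proposer/benchmark are all hypothetical stand-ins.

```python
import itertools

def summarize_logs(logs, runtime):
    # Stand-in for the AutoGuide idea: compress raw execution output
    # into actionable text feedback for the next LLM proposal.
    return f"Last run took {runtime:.2f}s; logs: {logs}"

def optimize_mapper(propose_mapper, run_benchmark, iterations=10):
    """Generate-evaluate-feedback loop: the LLM proposes a mapper in the
    DSL, the benchmark runs it, and a semantic log summary (not just a
    scalar score) guides the next proposal."""
    best_mapper, best_time = None, float("inf")
    feedback = "No runs yet; propose an initial mapping policy."
    for _ in range(iterations):
        mapper = propose_mapper(feedback)        # 1. generative step (LLM)
        runtime, logs = run_benchmark(mapper)    # 2. execute the candidate
        feedback = summarize_logs(logs, runtime) # 3. rich feedback signal
        if runtime < best_time:
            best_mapper, best_time = mapper, runtime
    return best_mapper, best_time

# Toy usage: a fake "LLM" that cycles through block sizes, and a fake
# benchmark whose runtime is minimized at block_size == 128.
_sizes = itertools.cycle([64, 256, 128])

def toy_propose(feedback):
    return {"block_size": next(_sizes)}

def toy_run(mapper):
    runtime = abs(mapper["block_size"] - 128) / 100 + 0.5
    return runtime, f"mapped with block_size={mapper['block_size']}"

best, t = optimize_mapper(toy_propose, toy_run, iterations=6)
```

In the real system the proposer would be an LLM emitting DSL code and the benchmark a full parallel-program run; the structure of the loop is what the Agent-System Interface and AutoGuide components organize.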
Problem

Research questions and friction points this paper is trying to address.

Multitasking Efficiency
Resource Allocation
Scientific Computing
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM optimizer
workload distribution
memory management