Evaluating the Efficacy of LLM-Based Reasoning for Multiobjective HPC Job Scheduling

📅 2025-05-29
🤖 AI Summary
In multi-objective high-performance computing (HPC) scheduling, conflicting objectives such as makespan, fairness, and resource utilization, combined with the inability of traditional schedulers to adapt to dynamic, heterogeneous environments, pose significant challenges. To address this, we propose the first large language model (LLM)-based, ReAct-style interpretable scheduler for HPC. Without domain-specific fine-tuning, it performs multi-objective co-optimization through natural language reasoning and iterative action execution. We introduce two key innovations: a traceable scratchpad memory for decision provenance and a constraint-enforcing verification module that ensures safety and interpretability. Evaluated on seven real-world HPC workload scenarios, our method significantly outperforms FCFS, SJF, and OR-Tools at balancing competing objectives and satisfying hard constraints, but it incurs non-negligible inference latency, a trade-off between solution quality and real-time responsiveness.
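The summary above describes a Reason + Act loop with a scratchpad for decision provenance and a verification module that rejects infeasible actions. The paper does not publish code, so the sketch below is purely illustrative: `llm_propose` stands in for the actual LLM call (a real system would prompt a model such as o4-mini or Claude with the queue state and scratchpad), and the shortest-job-first stub, `Job`/`Cluster` fields, and constraint logic are all assumptions made to keep the example runnable.

```python
from dataclasses import dataclass

@dataclass
class Job:
    job_id: str
    nodes_needed: int
    est_runtime: float  # hours

@dataclass
class Cluster:
    free_nodes: int

def llm_propose(queue, cluster, scratchpad):
    """Stand-in for the LLM 'Reason + Act' step.

    A real implementation would send the queue, cluster state, and the
    scratchpad to a model and parse a proposed action plus its natural
    language reasoning from the reply. Here a deterministic
    shortest-job-first heuristic keeps the sketch runnable.
    """
    runnable = [j for j in queue if j.nodes_needed <= cluster.free_nodes]
    if not runnable:
        return None, "no job fits the free nodes; wait"
    choice = min(runnable, key=lambda j: j.est_runtime)
    return choice, f"picked {choice.job_id}: shortest runnable job"

def verify(job, cluster):
    """Constraint-enforcement module: reject infeasible or unsafe actions."""
    return job is not None and 0 < job.nodes_needed <= cluster.free_nodes

def schedule(queue, cluster):
    """Iterate propose -> verify -> act, logging every decision.

    Note: this sketch only derives a dispatch order; it does not model
    jobs completing and freeing nodes over time.
    """
    scratchpad = []        # traceable decision provenance
    dispatch_order = []
    while queue:
        job, reasoning = llm_propose(queue, cluster, scratchpad)
        scratchpad.append(reasoning)
        if not verify(job, cluster):
            break          # a real scheduler would wait for nodes to free
        queue.remove(job)
        dispatch_order.append(job.job_id)
    return dispatch_order, scratchpad
```

For example, `schedule([Job("a", 2, 1.0), Job("b", 1, 0.5)], Cluster(free_nodes=4))` dispatches `b` before `a`, and the returned scratchpad records the reasoning behind each pick, which is the interpretability property the summary emphasizes.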

📝 Abstract
High-Performance Computing (HPC) job scheduling involves balancing conflicting objectives such as minimizing makespan, reducing wait times, optimizing resource use, and ensuring fairness. Traditional methods, including heuristic-based (e.g., First-Come-First-Served) or intensive optimization techniques, often lack adaptability to dynamic workloads and heterogeneous HPC systems. To address this, we propose a novel Large Language Model (LLM)-based scheduler using a ReAct-style framework (Reason + Act), enabling iterative, interpretable decision-making. The system incorporates a scratchpad memory to track scheduling history and refine decisions via natural language feedback, while a constraint enforcement module ensures feasibility and safety. We evaluate our approach using OpenAI's O4-Mini and Anthropic's Claude 3.7 across seven real-world HPC workload scenarios, including heterogeneous mixes, bursty patterns, and adversarial cases. Comparisons against FCFS, Shortest Job First, and Google OR-Tools (on 10 to 100 jobs) reveal that LLM-based scheduling effectively balances multiple objectives while offering transparent reasoning through natural language traces. The method excels in constraint satisfaction and adapts to diverse workloads without domain-specific training. However, a trade-off between reasoning quality and computational overhead challenges real-time deployment. This work presents the first comprehensive study of reasoning-capable LLMs for HPC scheduling, demonstrating their potential to handle multiobjective optimization while highlighting limitations in computational efficiency. The findings provide insights into leveraging advanced language models for complex scheduling problems in dynamic HPC environments.
Problem

Research questions and friction points this paper is trying to address.

Balancing conflicting objectives in HPC job scheduling
Addressing adaptability gaps in traditional scheduling methods
Evaluating LLM-based schedulers for multiobjective optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based scheduler with ReAct framework
Scratchpad memory for scheduling history tracking
Constraint enforcement module for feasibility and safety