🤖 AI Summary
Existing systems struggle to simultaneously satisfy the heterogeneous and often conflicting runtime requirements of hybrid AI-HPC workflows, such as simulation, model training, high-throughput inference, and agent-based control. This paper introduces RHAPSODY, a multi-runtime middleware that provides a unified orchestration model built on four core abstractions: tasks, services, resources, and policies. RHAPSODY enables cooperative scheduling of MPI executables, AI-serving runtimes (e.g., vLLM), and fine-grained task runtimes without replacing existing infrastructure. Its core components include a unified resource-abstraction layer, a low-overhead scheduler, a Dragon/vLLM integration framework, and an HPC-AI co-execution policy engine. Evaluation shows negligible runtime overhead, scalability to thousands of heterogeneous nodes, near-linear speedup for high-throughput inference, and a 40% reduction in data- and control-coupling latency for agentic workflows. RHAPSODY has been deployed and validated on multiple exascale supercomputing platforms.
📝 Abstract
Hybrid AI-HPC workflows combine large-scale simulation, training, high-throughput inference, and tightly coupled, agent-driven control within a single execution campaign. These workflows impose heterogeneous and often conflicting requirements on runtime systems, spanning MPI executables, persistent AI services, fine-grained tasks, and low-latency AI-HPC coupling. Existing systems typically address only subsets of these requirements, limiting their ability to support emerging AI-HPC applications at scale. We present RHAPSODY, a multi-runtime middleware that enables concurrent execution of heterogeneous AI-HPC workloads through uniform abstractions for tasks, services, resources, and execution policies. Rather than replacing existing runtimes, RHAPSODY composes and coordinates them, allowing simulation codes, inference services, and agentic workflows to coexist within a single job allocation on leadership-class HPC platforms. We evaluate RHAPSODY with Dragon and vLLM on multiple HPC systems using representative heterogeneous, inference-at-scale, and tightly coupled AI-HPC workflows. Our results show that RHAPSODY introduces minimal runtime overhead, sustains increasing heterogeneity at scale, achieves near-linear scaling for high-throughput inference workloads, and provides data- and control-efficient coupling between AI and HPC tasks in agentic workflows.