Scalable Engine and the Performance of Different LLM Models in a SLURM based HPC architecture

📅 2025-08-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address challenges in high-performance computing (HPC) environments—including heterogeneous LLM deployment, inflexible resource scheduling, and significant performance volatility under multi-model concurrent inference—this paper proposes a scalable LLM inference engine architecture built atop SLURM. The architecture integrates containerized microservices with dynamic resource orchestration, enabling fine-grained, coordinated allocation of CPU, GPU, and memory resources, and provides unified access via RESTful APIs to support both batch and interactive inference workloads. A novel multi-step “tribunal” refinement workflow is introduced to enhance fault tolerance and operational flexibility. Experiments on Llama-series models across multi-node HPC clusters demonstrate sub-50 ms latency at 128 concurrent requests for smaller models (e.g., Llama 3.2 1B/3B and Llama 3.1 8B), and stable execution at two concurrent users for the large model (Llama 3.1 70B), with low scheduling overhead and strong horizontal scalability. The system has been deployed in production applications, including retrieval-augmented generation chatbots.
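The unified REST access for batch and interactive workloads described above can be illustrated with a minimal client-side sketch. The endpoint paths (`/infer`, `/infer/bulk`) and payload fields below are assumptions for illustration only, not the paper's actual API:

```python
import json

# Hypothetical endpoint paths and payload schema (assumptions, not the
# paper's actual API): one endpoint for single interactive requests,
# one for bulk/batch inference over many prompts at once.
SINGLE_ENDPOINT = "/infer"
BULK_ENDPOINT = "/infer/bulk"

def single_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build a payload for one interactive inference call."""
    return {"endpoint": SINGLE_ENDPOINT,
            "body": {"model": model, "prompt": prompt,
                     "max_tokens": max_tokens}}

def bulk_request(model: str, prompts: list, max_tokens: int = 256) -> dict:
    """Build a payload batching many prompts into one bulk call."""
    return {"endpoint": BULK_ENDPOINT,
            "body": {"model": model, "prompts": prompts,
                     "max_tokens": max_tokens}}

# Example: 128 prompts for a small model, mirroring the concurrency level
# reported in the paper, packed into a single bulk payload.
payload = bulk_request("llama-3.2-1b", [f"question {i}" for i in range(128)])
print(json.dumps(payload["body"])[:60])
```

In this sketch, batch jobs would hit the bulk endpoint once while interactive chatbots issue many single requests; the same JSON schema serves both paths.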

📝 Abstract
This work elaborates a high-performance computing (HPC) architecture based on the Simple Linux Utility for Resource Management (SLURM) [1] for deploying heterogeneous Large Language Models (LLMs) as a scalable inference engine. Dynamic resource scheduling and seamless integration of containerized microservices are leveraged to manage CPU, GPU, and memory allocations efficiently across multi-node clusters. Extensive experiments with Llama 3.2 (1B and 3B parameters) [2] and Llama 3.1 (8B and 70B) [3] probe throughput, latency, and concurrency, showing that small models can handle up to 128 concurrent requests at sub-50 ms latency, while larger models saturate with as few as two concurrent users, with latency exceeding 2 seconds. The architecture includes Representational State Transfer Application Programming Interface (REST API) [4] endpoints for single and bulk inference, as well as advanced workflows such as multi-step "tribunal" refinement. Experimental results confirm minimal overhead from containerization and scheduling, and show that the approach scales reliably in both batch and interactive settings. We further illustrate real-world scenarios, including the deployment of chatbots with retrieval-augmented generation, demonstrating the flexibility and robustness of the architecture. The results pave the way for significantly more efficient, responsive, and fault-tolerant LLM inference on large-scale HPC infrastructures.
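The multi-step "tribunal" refinement workflow named in the abstract can be sketched as a draft/critique/revise loop. The three-stage structure, function names, and stubbed model calls below are assumptions for illustration; the paper's exact protocol is not spelled out here:

```python
from typing import Callable, List

def tribunal_refine(prompt: str,
                    draft_model: Callable[[str], str],
                    judges: List[Callable[[str, str], str]],
                    reviser: Callable[[str, str, List[str]], str],
                    rounds: int = 2) -> str:
    """Multi-step refinement: a draft answer is critiqued by a panel of
    judge models, then revised using their critiques, for a fixed number
    of rounds. A flawed draft is corrected rather than returned, which is
    one way such a workflow can add fault tolerance."""
    answer = draft_model(prompt)
    for _ in range(rounds):
        critiques = [judge(prompt, answer) for judge in judges]
        answer = reviser(prompt, answer, critiques)
    return answer

# Stub callables standing in for real LLM inference calls (illustrative only).
draft = lambda p: f"draft({p})"
judge_a = lambda p, a: f"judgeA:{a}"
judge_b = lambda p, a: f"judgeB:{a}"
revise = lambda p, a, cs: f"rev[{a}|{len(cs)}]"

result = tribunal_refine("q", draft, [judge_a, judge_b], revise, rounds=1)
print(result)  # rev[draft(q)|2]
```

In a deployment like the one described, each callable would wrap a REST call to a model served on the cluster, so the tribunal steps can be scheduled as ordinary inference requests.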
Problem

Research questions and friction points this paper is trying to address.

Developing scalable HPC architecture for efficient LLM deployment
Optimizing resource allocation for heterogeneous models in clusters
Evaluating performance metrics across various model sizes
Innovation

Methods, ideas, or system contributions that make the work stand out.

SLURM-based HPC architecture for scalable LLM deployment
Dynamic resource scheduling for CPU, GPU, memory allocation
Containerized microservices with REST APIs for inference
Anderson de Lima Luiz
AImotion Bavaria, Germany; Technische Hochschule Ingolstadt
Shubham Vijay Kurlekar
AImotion Bavaria, Germany; Technische Hochschule Ingolstadt
Munir Georges
Technische Hochschule Ingolstadt
speech recognition, spoken language understanding, speaker identification, deep learning