🤖 AI Summary
To address challenges in high-performance computing (HPC) environments—including heterogeneous LLM deployment, inflexible resource scheduling, and significant performance volatility under multi-model concurrent inference—this paper proposes a scalable LLM inference engine architecture built atop SLURM. The architecture integrates containerized microservices with dynamic resource orchestration, enabling fine-grained, coordinated allocation of CPU, GPU, and memory resources, and provides unified access via RESTful APIs to support both batch and interactive inference workloads. A novel multi-step “tribunal” refinement workflow is introduced to enhance fault tolerance and operational flexibility. Experiments on Llama-series models across multi-node HPC clusters demonstrate sub-50 ms latency and 128 concurrent requests for smaller models (e.g., Llama-3-8B), and stable dual-concurrent execution for large models (e.g., Llama-3-70B), with low scheduling overhead and strong horizontal scalability. The system has been successfully deployed in production applications, including retrieval-augmented generation chatbots.
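The unified REST interface mentioned above accepts both interactive and batch submissions. As a minimal sketch of what such request payloads might look like (the endpoint schema, field names, and model identifier below are illustrative assumptions, not published by the paper):

```python
# Hypothetical payload builders for the single- and bulk-inference
# endpoints. Field names and the default model identifier are assumptions;
# the paper does not publish its API schema.
import json


def single_request(prompt: str, model: str = "llama-3.1-8b") -> dict:
    """Payload for one interactive inference call."""
    return {"model": model, "prompt": prompt, "stream": False}


def bulk_request(prompts: list[str], model: str = "llama-3.1-8b") -> dict:
    """Payload for a batch of prompts submitted in a single call."""
    return {"model": model, "prompts": prompts, "stream": False}


# Serialize a bulk payload as it would be POSTed to the inference service.
body = json.dumps(bulk_request(["What is SLURM?", "Define HPC."]))
print(body)
```

Separating single and bulk endpoints lets the scheduler route interactive traffic to latency-optimized replicas while batching offline workloads for throughput.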
📝 Abstract
This work presents a high-performance computing (HPC) architecture based on the Simple Linux Utility for Resource Management (SLURM) [1] for deploying heterogeneous Large Language Models (LLMs) as a scalable inference engine. The architecture leverages dynamic resource scheduling and seamless integration of containerized microservices to manage CPU, GPU, and memory allocations efficiently across multi-node clusters. Extensive experiments, using Llama 3.2 (1B and 3B parameters) [2] and Llama 3.1 (8B and 70B) [3], probe throughput, latency, and concurrency, showing that small models can handle up to 128 concurrent requests at sub-50 ms latency, while larger models saturate with as few as two concurrent users at latencies exceeding 2 seconds. The architecture exposes Representational State Transfer Application Programming Interface (REST API) [4] endpoints for single and bulk inference, as well as advanced workflows such as multi-step "tribunal" refinement. Experimental results confirm minimal overhead from containerization and scheduling, and show that the approach scales reliably in both batch and interactive settings. We further illustrate real-world scenarios, including the deployment of chatbots with retrieval-augmented generation, demonstrating the flexibility and robustness of the architecture. The results pave the way for significantly more efficient, responsive, and fault-tolerant LLM inference on large-scale HPC infrastructures.
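The abstract names a multi-step "tribunal" refinement workflow but not its internals. One plausible reading is a draft-then-judge pipeline: several model passes produce candidate answers and a judging pass selects or refines among them. The sketch below is an assumption about that structure, with stub callables standing in for real LLM endpoints:

```python
# Assumed structure of a "tribunal" refinement workflow: multiple drafter
# models answer the prompt, then a judge model picks/refines a final answer.
# The paper does not specify the workflow's internals; this is a sketch.
from typing import Callable


def tribunal(prompt: str,
             drafters: list[Callable[[str], str]],
             judge: Callable[[str, list[str]], str]) -> str:
    """Collect candidate answers from each drafter, then let the judge decide."""
    candidates = [draft(prompt) for draft in drafters]
    return judge(prompt, candidates)


# Stub "models" for illustration only; in the deployed system each would be
# a REST call to a model instance scheduled by SLURM.
drafters = [lambda p: "Answer: " + p, lambda p: p]
judge = lambda p, cands: max(cands, key=len)  # trivially prefer the longest draft

print(tribunal("SLURM schedules jobs.", drafters, judge))
```

Because each drafter call is independent, the steps map naturally onto separate containerized model services, which is presumably what makes the workflow attractive on a multi-node cluster.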