Evaluating Kubernetes Performance for GenAI Inference: From Automatic Speech Recognition to LLM Summarization

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the efficient support of generative AI inference workloads on Kubernetes, balancing performance and resource efficiency across both batch and online scenarios. We design a multi-stage inference pipeline that integrates automatic speech recognition (Whisper) with large language model summarization and, for the first time, cohesively combines Kueue, the Dynamic Accelerator Slicer (DAS), and the Gateway API Inference Extension (GAIE) to enable unified scheduling and high-performance execution. Experimental results demonstrate substantial improvements in system efficiency: Kueue reduces total makespan by up to 15%, DAS decreases mean job completion time by 36%, and GAIE accelerates time-to-first-token by 82%.

📝 Abstract
As Generative AI (GenAI), particularly inference, rapidly emerges as a dominant workload category, the Kubernetes ecosystem is proactively evolving to natively support its unique demands. This industry paper demonstrates how emerging Kubernetes-native projects can be combined to deliver the benefits of container orchestration, such as scalability and resource efficiency, to complex AI workflows. We implement and evaluate an illustrative, multi-stage use case consisting of automatic speech recognition and summarization. First, we address batch inference by using Kueue to manage jobs that transcribe audio files with Whisper models and the Dynamic Accelerator Slicer (DAS) to increase parallel job execution. Second, we address a discrete online inference scenario by feeding the transcripts to a Large Language Model for summarization hosted using llm-d, a novel solution utilizing the recent developments around the Kubernetes Gateway API Inference Extension (GAIE) for optimized routing of inference requests. Our findings illustrate that these complementary components (Kueue, DAS, and GAIE) form a cohesive, high-performance platform, proving Kubernetes' capability to serve as a unified foundation for demanding GenAI workloads: Kueue reduced total makespan by up to 15%; DAS shortened mean job completion time by 36%; and GAIE, working in conjunction with llm-d, improved tail Time to First Token latency by up to 90% even under high loads.
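The batch stage described in the abstract relies on Kueue's standard queueing mechanism: a Job is submitted in a suspended state and labeled for a LocalQueue, and Kueue unsuspends it once quota is available. A minimal sketch of that pattern follows; the queue names, namespace, and Whisper container image are illustrative assumptions, not taken from the paper.

```yaml
# Hypothetical Kueue setup for the transcription stage:
# a LocalQueue bound to a cluster-level queue, plus a batch Job
# labeled so that Kueue controls its admission.
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  name: transcription-queue        # illustrative name
  namespace: asr
spec:
  clusterQueue: gpu-cluster-queue  # illustrative name
---
apiVersion: batch/v1
kind: Job
metadata:
  name: whisper-transcribe
  namespace: asr
  labels:
    kueue.x-k8s.io/queue-name: transcription-queue  # hands admission to Kueue
spec:
  suspend: true                    # Kueue unsuspends the Job when quota allows
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: whisper
          image: example.com/whisper-batch:latest   # placeholder image
          resources:
            limits:
              nvidia.com/gpu: 1    # one accelerator (or one DAS slice) per job
```

With DAS in the picture, the per-job accelerator request would map to a GPU slice rather than a whole device, which is what enables the increased job parallelism the paper reports.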
Problem

Research questions and friction points this paper is trying to address.

Generative AI
Kubernetes
Inference
Orchestration
LLM
Innovation

Methods, ideas, or system contributions that make the work stand out.

Kubernetes-native AI
Dynamic Accelerator Slicer
Gateway API Inference Extension
GenAI inference orchestration
Kueue