🤖 AI Summary
This work addresses the inherent trade-off between latency and throughput when deploying dense large language models (e.g., Llama-3.1-70B/405B), particularly when model size exceeds device memory and the choice of parallelization strategy becomes critical to performance. The study systematically evaluates tensor parallelism (TP), pipeline parallelism (PP), and their hybrid configurations for single-node inference, revealing that TP is more effective at reducing latency while PP better enhances throughput. Building on this insight, the authors propose adjusting the TP–PP ratio to finely tune the latency–throughput trade-off. Experimental results demonstrate that this approach enables system optimization tailored to diverse service objectives, such as meeting strict SLA requirements or maximizing throughput, thereby offering clear architectural guidance for deploying dense LLMs in practice.
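The "TP–PP ratio" maps directly onto the parallelism degrees exposed by common inference engines. The summary does not name the serving stack used in the paper; as an illustration only, vLLM lets you choose any factorization of the node's GPU count via two flags (the 8-GPU node and the model ID below are assumptions for the example):

```shell
# Illustrative sketch, not the paper's setup: vLLM serving Llama-3.1-70B
# on a single 8-GPU node. TP degree x PP degree must equal the GPU count;
# shifting the ratio toward TP favors latency, toward PP favors throughput.

# Latency-oriented: all 8 GPUs in tensor parallelism
vllm serve meta-llama/Llama-3.1-70B-Instruct \
    --tensor-parallel-size 8 --pipeline-parallel-size 1

# Hybrid, more throughput-oriented: 2-way TP x 4-way PP
vllm serve meta-llama/Llama-3.1-70B-Instruct \
    --tensor-parallel-size 2 --pipeline-parallel-size 4
```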
📝 Abstract
Breakthroughs in the generative AI domain have fueled an explosion of large language model (LLM)-powered applications, whose workloads fundamentally consist of sequences of inferences through transformer architectures. Within this rapidly expanding ecosystem, dense LLMs--those that activate all model parameters for each generated token--form the foundation for advanced expert-based variants. Dense models continue to dominate because of their strong generalization ability, scalability, ease of fine-tuning, and versatility across diverse tasks. In LLM inference systems, performance is mainly characterized by latency, response time, and throughput (i.e., tokens generated per unit of time). Latency and throughput are inherently coupled: optimizing for one often comes at the expense of the other. Moreover, batching strategies and parallelism configurations, which are essential when dense model parameters exceed device memory capacity, can significantly affect both latency and overall system throughput. This paper (i) investigates the workloads of two representative dense LLMs--Llama-3.1-70B and Llama-3.1-405B--focusing in particular on intra-node parallelization schemes, (ii) analyzes how input characteristics, batching, and parallelism strategies influence latency flexibility and the latency-throughput tradeoff, and (iii) identifies key performance bottlenecks that inform design choices for meeting service-level agreements (SLAs) and sustaining inference quality. Our empirical evaluations reveal that Tensor Parallelism (TP) better serves latency objectives, while Pipeline Parallelism (PP) is better suited to throughput-oriented applications. We highlight that hybrid configurations, obtained by tuning the TP and PP degrees, provide fine-grained control over the latency-throughput interplay.
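The TP-versus-PP intuition in the abstract can be sketched with a toy analytic model. All cost constants below are illustrative assumptions, not measurements from the paper; only the layer count (Llama-3.1-70B has 80 transformer layers) is factual. TP shards each layer's compute but pays an all-reduce per layer; PP does not shorten a single token's path through the layers, but overlapping microbatches across stages raises steady-state throughput:

```python
# Toy latency/throughput model for (TP, PP) factorizations of an 8-GPU node.
# LAYER_TIME_MS and TP_COMM_MS are assumed constants for illustration only.

NUM_GPUS = 8
NUM_LAYERS = 80          # Llama-3.1-70B transformer layer count
LAYER_TIME_MS = 1.0      # assumed per-layer compute time on one GPU
TP_COMM_MS = 0.2         # assumed per-layer all-reduce cost when TP > 1


def per_layer_ms(tp: int) -> float:
    """Per-layer time: compute is split tp ways, plus a comm term if sharded."""
    return LAYER_TIME_MS / tp + (TP_COMM_MS if tp > 1 else 0.0)


def token_latency_ms(tp: int, pp: int) -> float:
    """Single-token decode latency: the token still traverses all layers
    sequentially, so PP gives no latency benefit while TP does."""
    return NUM_LAYERS * per_layer_ms(tp)


def steady_throughput_tps(tp: int, pp: int, microbatches: int = 8) -> float:
    """Tokens/s once the pipeline is full: the per-stage time bounds
    throughput, and the pipeline bubble shrinks with more microbatches."""
    stage_ms = (NUM_LAYERS / pp) * per_layer_ms(tp)
    efficiency = microbatches / (microbatches + pp - 1)  # bubble penalty
    return 1000.0 / stage_ms * efficiency


for tp in (1, 2, 4, 8):
    pp = NUM_GPUS // tp
    print(f"TP={tp} PP={pp}: latency={token_latency_ms(tp, pp):5.1f} ms, "
          f"throughput={steady_throughput_tps(tp, pp):5.1f} tok/s")
```

Even with these crude constants, the model reproduces the abstract's finding: the pure-TP configuration minimizes per-token latency, while PP-heavy configurations deliver the highest steady-state throughput.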