🤖 AI Summary
The exponential growth of large language models (LLMs) has outpaced hardware and system capabilities, severely limiting their scalability in distributed environments. Method: We propose the first unified performance–cost co-modeling framework for LLM training and inference, integrating FlashAttention, Mixture-of-Experts (MoE) architectures, and a novel 5D parallel communication algorithm, and introducing, for the first time, a chiplet-level cost model. Our framework combines analytical performance modeling, memory-bandwidth-aware optimization, network-topology-adaptive communication, and fine-grained cost estimation to enable quantitative trade-off analysis across heterogeneous hardware architectures. Contribution/Results: Experiments demonstrate accurate prediction of throughput, latency, and per-FLOP cost across diverse configurations. The framework provides reproducible, scalable, and architecture-agnostic decision support for LLM-specific system design and hardware–software co-optimization.
📝 Abstract
Large language models (LLMs), based on transformer architectures, have revolutionized numerous domains within artificial intelligence, science, and engineering due to their exceptional scalability and adaptability. However, the exponential growth in LLM size and complexity has outpaced advancements in compute capacity, memory bandwidth, network performance, and cost efficiency, posing significant challenges to their scalability on distributed systems. To address these limitations, alternative model architectures, optimization strategies, communication-aware network topologies, and novel system design approaches have been proposed in the literature. This paper introduces a performance–cost modeling methodology for LLM training and inference that integrates state-of-the-art compute techniques with memory optimizations and the latest communication techniques. Building on an analytical performance model, our approach incorporates recent innovations such as the FlashAttention technique and mixture-of-experts (MoE) models to address memory bandwidth and compute bottlenecks. It also considers the impact of different network topologies and topology-specific communication algorithms under 5D parallelism. The framework further integrates a chiplet cost model. The proposed modeling methodology provides valuable insights to guide future compute system design and facilitates hardware–software co-development, in particular through its ability to analyze performance–cost trade-offs for various system architectural configurations.
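To make the flavor of such an analytical performance–cost model concrete, here is a minimal roofline-style sketch. Everything below is an illustrative assumption on our part, not taken from the paper: the function names (`step_time_s`, `cost_per_flop`), the device figures (300 TFLOP/s peak compute, 2 TB/s memory bandwidth), the chip price, and the utilization factor are all hypothetical placeholders.

```python
def step_time_s(flops: float, bytes_moved: float,
                peak_flops: float, peak_bw: float) -> float:
    """Roofline estimate: a kernel's time is bounded by whichever is slower,
    compute (flops / peak_flops) or memory traffic (bytes_moved / peak_bw)."""
    return max(flops / peak_flops, bytes_moved / peak_bw)


def cost_per_flop(chip_price_usd: float, lifetime_s: float,
                  utilization: float, peak_flops: float) -> float:
    """Amortized hardware cost per delivered FLOP. A chiplet cost model would
    derive chip_price_usd from die area and yield; here it is a given constant."""
    delivered_flops = peak_flops * utilization * lifetime_s
    return chip_price_usd / delivered_flops


# Hypothetical accelerator: 300 TFLOP/s peak, 2 TB/s HBM bandwidth.
t = step_time_s(flops=1e12, bytes_moved=4e9, peak_flops=300e12, peak_bw=2e12)
c = cost_per_flop(chip_price_usd=20_000, lifetime_s=3 * 365 * 86400,
                  utilization=0.4, peak_flops=300e12)
print(f"step time ~ {t * 1e3:.2f} ms, cost ~ {c:.2e} $/FLOP")
```

The full framework layers topology-aware communication terms and finer-grained cost components on top of this kind of compute/memory bound, but the same max-of-bottlenecks structure is the starting point.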