🤖 AI Summary
Large language model (LLM) inference services incur substantial carbon emissions, yet prior work lacks systematic quantification of operational carbon (GPU-dominated) versus embodied carbon (CPU/memory/storage-dominated) under real production workloads.
Method: This paper introduces the first holistic carbon-aware infrastructure framework for LLM inference, grounded in four design principles—reduction, reuse, rightsizing, and recycling—and spanning the full hardware lifecycle. It integrates hardware-level energy efficiency modeling, time-varying carbon intensity–aware scheduling, dynamic resource orchestration, and elastic batching, all calibrated on production-scale generative AI trace data.
Contribution/Results: Unlike conventional approaches that optimize only energy or latency, our framework jointly minimizes total carbon emissions while meeting strict performance SLOs. Evaluations against state-of-the-art baselines demonstrate up to a 47% reduction in total carbon footprint, significantly improving carbon efficiency—the ratio of computational output to carbon emitted—without compromising service quality.
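To make the carbon intensity–aware scheduling idea concrete, here is a minimal illustrative sketch (not the paper's implementation): latency-sensitive requests run immediately, while deferrable offline batch-inference jobs are placed in the lowest-carbon hour within their deadline, using a grid carbon-intensity forecast. All names and numbers below are hypothetical.

```python
def schedule_batch_jobs(jobs, carbon_forecast):
    """Assign each deferrable job to the greenest hour before its deadline.

    jobs: list of (job_id, deadline_hour) tuples; deadline_hour is the
          number of hours from now by which the job must start.
    carbon_forecast: list of grid carbon intensities (gCO2/kWh),
          where index i is the forecast for i hours from now.
    Returns a dict mapping job_id -> chosen start hour.
    """
    plan = {}
    for job_id, deadline in jobs:
        # only hours inside the job's deadline window are candidates
        window = carbon_forecast[: min(deadline, len(carbon_forecast))]
        # pick the hour with the lowest forecast carbon intensity
        plan[job_id] = min(range(len(window)), key=window.__getitem__)
    return plan


# Hypothetical 6-hour forecast: intensity dips mid-window (e.g., solar peak).
forecast = [520, 480, 310, 290, 350, 500]
print(schedule_batch_jobs([("batch-a", 4), ("batch-b", 6)], forecast))
# → {'batch-a': 3, 'batch-b': 3}  (hour 3 has the lowest intensity, 290)
```

A production scheduler would additionally respect GPU capacity limits and job runtimes, but the core trade—shifting deferrable work into low-carbon windows without touching interactive traffic—is captured by this greedy placement.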
📝 Abstract
The rapid increase in LLM ubiquity and scale places unprecedented demands on computing infrastructure. These demands not only require large compute and memory resources, but also consume significant energy, yielding large operational and embodied carbon emissions. In this work, we present two main observations. First, while GPUs dominate operational carbon, host processing systems (e.g., CPUs, memory, storage) dominate embodied carbon. Second, based on traces from production deployment of two Generative AI services in the cloud, offline batch inference accounts for a significant portion (up to 55%) of serving capacity. We propose EcoServe, which embodies four pillars of carbon-conscious infrastructure design for LLM serving systems: ***Reduce, Reuse, Rightsize, and Recycle***. We demonstrate that EcoServe can lower carbon emissions by up to 47%, compared to performance-, energy-, and cost-optimized design points, while maintaining performance targets and SLOs.