🤖 AI Summary
To address the GPU underutilization, delayed adaptation, and degraded service quality that arise from deploying LLM inference and retraining in isolation, this paper proposes LeMix, a unified scheduling framework. LeMix identifies and exploits synergistic optimization opportunities between inference and training co-located on multi-GPU systems, an opportunity prior work had not explored. It combines offline performance profiling, execution-time prediction, and fine-grained runtime scheduling to dynamically coordinate the heterogeneous workloads and mitigate cross-task interference. Compared with conventional isolated deployment, LeMix improves end-to-end throughput by up to 3.53×, reduces inference loss by up to 0.61×, and raises response-time SLO attainment by up to 2.12×, balancing stringent latency guarantees with high hardware utilization.
📝 Abstract
Modern deployment of large language models (LLMs) frequently involves both inference serving and continuous retraining to stay aligned with evolving data and user feedback. Common practices separate these workloads onto distinct servers in isolated phases, causing substantial inefficiencies (e.g., GPU idleness) and delayed adaptation to new data in distributed settings. Our empirical analysis reveals that these inefficiencies stem from dynamic request arrivals during serving and workload heterogeneity in pipeline-parallel training. To address these challenges, we propose LeMix, a system for co-locating and managing concurrent LLM serving and training workloads. LeMix integrates offline profiling, execution prediction mechanisms, and runtime scheduling to dynamically adapt resource allocation based on workload characteristics and system conditions. By understanding task-specific behaviors and co-execution interference across shared nodes, LeMix improves utilization and serving quality without compromising serving responsiveness. Our evaluation shows that LeMix improves throughput by up to 3.53x, reduces inference loss by up to 0.61x, and delivers up to 2.12x higher response time SLO attainment over traditional separate setups. To our knowledge, this is the first work to uncover and exploit the opportunities of joint LLM inference and training, paving the way for more resource-efficient deployment of LLMs in production environments.
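To make the co-scheduling idea concrete, below is a minimal, hypothetical sketch of the kind of decision a runtime scheduler on a shared node might make: use profiled execution-time estimates to run training micro-batches when queued inference requests have enough SLO slack, and switch to serving when they do not. All names, signatures, and thresholds here (e.g., `predicted_runtime`, `train_microbatch_runtime`, `next_action`) are illustrative assumptions, not LeMix's actual API or policy.

```python
# Hypothetical sketch of SLO-aware co-scheduling on one shared node.
# Illustrative only; not the paper's actual implementation or interface.
import time
from dataclasses import dataclass, field
from typing import List


@dataclass
class InferenceRequest:
    request_id: int
    arrival: float            # arrival timestamp (seconds)
    predicted_runtime: float  # estimate from an offline-profiled execution-time model
    slo: float                # response-time SLO (seconds)


@dataclass
class SharedNodeScheduler:
    """Decides, at each step, whether the node serves inference or trains."""
    train_microbatch_runtime: float                      # profiled cost of one training micro-batch
    queue: List[InferenceRequest] = field(default_factory=list)

    def submit(self, req: InferenceRequest) -> None:
        self.queue.append(req)

    def next_action(self, now: float) -> str:
        """Return 'serve' if any queued request would risk missing its SLO
        were the node to run a training micro-batch first; otherwise 'train'."""
        if not self.queue:
            return "train"
        for req in self.queue:
            slack = (req.arrival + req.slo) - now
            # Too little slack to fit a training micro-batch before serving: serve now.
            if slack < self.train_microbatch_runtime + req.predicted_runtime:
                return "serve"
        return "train"  # enough slack on all requests: use spare capacity for retraining


if __name__ == "__main__":
    sched = SharedNodeScheduler(train_microbatch_runtime=0.4)
    now = time.time()
    sched.submit(InferenceRequest(1, arrival=now, predicted_runtime=0.2, slo=1.0))
    print(sched.next_action(now))        # ample slack -> 'train'
    print(sched.next_action(now + 0.7))  # SLO at risk -> 'serve'
```

The sketch only captures the high-level intuition of trading idle serving capacity for retraining progress under latency constraints; the paper's actual scheduler additionally models co-execution interference and workload heterogeneity across pipeline-parallel stages.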