🤖 AI Summary
In industrial-scale large language model (LLM) training, frequent failures and iterative update-debug cycles force jobs to restart often, and slow initialization turns each restart into substantial startup overhead, wasting over 3.5% of total GPU time in one production cluster. Method: This work presents the first systematic characterization of LLM training startup bottlenecks based on real production traces, identifying container image loading, runtime dependency installation, and checkpoint restoration as the three dominant latency sources. It proposes Bootseer, a system-level optimization framework that combines hot block record-and-prefetch, dependency snapshotting, and striped HDFS-FUSE storage, requiring no modifications to training code. Contribution/Results: Deployed on a production thousand-GPU cluster, Bootseer reduces end-to-end startup overhead by 50%, significantly improving GPU utilization. The system has been adopted in large-scale production environments.
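The summary names three techniques without showing their mechanics. As a rough illustration of the first, here is a minimal Python sketch of hot block record-and-prefetch: record which blocks of a container image a cold start actually reads, then warm those blocks ahead of the container launch on later starts. The block size, manifest format, and read-hook interface are all assumptions made for illustration, not details taken from the paper.

```python
"""Minimal sketch of hot block record-and-prefetch for container images.

Assumptions (not from the paper): the image is a single flat file read
through a shim that can observe reads; the 4 MiB block size and JSON
manifest format are invented for illustration.
"""
import json

BLOCK_SIZE = 4 * 1024 * 1024  # assumed block granularity


class HotBlockRecorder:
    """Records which image blocks are touched during a cold start."""

    def __init__(self):
        self.hot_blocks = set()

    def on_read(self, offset: int, length: int) -> None:
        # Map a byte-range read onto the block indices it covers.
        first = offset // BLOCK_SIZE
        last = (offset + length - 1) // BLOCK_SIZE
        self.hot_blocks.update(range(first, last + 1))

    def save(self, manifest_path: str) -> None:
        with open(manifest_path, "w") as f:
            json.dump(sorted(self.hot_blocks), f)


def prefetch_hot_blocks(image_path: str, manifest_path: str) -> None:
    """Warm the cache with the blocks recorded on an earlier cold start."""
    with open(manifest_path) as f:
        hot_blocks = json.load(f)
    with open(image_path, "rb") as img:
        for block in hot_blocks:
            img.seek(block * BLOCK_SIZE)
            img.read(BLOCK_SIZE)  # read-and-discard populates the page cache
```

The record step pays off because startup typically touches only a fraction of an image, so prefetching just the recorded hot set is far cheaper than pulling the whole image.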
📝 Abstract
Large Language Models (LLMs) have become a cornerstone of modern AI, driving breakthroughs in natural language processing and expanding into multimodal tasks involving images, audio, and video. As with most computational software, it is important to distinguish between ordinary runtime performance and startup overhead. Prior research has focused on runtime performance: improving training efficiency and stability. This work focuses instead on the increasingly critical issue of startup overhead in training: the delay before training jobs begin execution. Startup overhead is particularly important in large, industrial-scale LLM training, where failures occur more frequently and multiple teams operate in iterative update-debug cycles. In one of our training clusters, more than 3.5% of GPU time is wasted due to startup overhead alone.
In this work, we present the first in-depth characterization of LLM training startup overhead based on real production data. We analyze the components of startup cost, quantify its direct impact, and examine how it scales with job size. These insights motivate the design of Bootseer, a system-level optimization framework that addresses three primary startup bottlenecks: (a) container image loading, (b) runtime dependency installation, and (c) model checkpoint resumption. To mitigate these bottlenecks, Bootseer introduces three techniques: (a) hot block record-and-prefetch, (b) dependency snapshotting, and (c) striped HDFS-FUSE. Bootseer has been deployed in a production environment and evaluated on real LLM training workloads, demonstrating a 50% reduction in startup overhead.
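To make the striped HDFS-FUSE idea concrete, here is a minimal sketch of striped parallel reads over a single large file, assuming the checkpoint is exposed as an ordinary file through a FUSE mount; the stripe size, worker count, and read-into-memory approach are illustrative choices, not details from the paper.

```python
"""Minimal sketch of striped parallel reads for checkpoint restoration.

Assumptions (not from the paper): the checkpoint is visible as a plain
file on a POSIX mount, and the 64 MiB stripe size is an invented value.
"""
import os
from concurrent.futures import ThreadPoolExecutor

STRIPE_SIZE = 64 * 1024 * 1024  # assumed stripe granularity


def read_striped(path: str, workers: int = 8) -> bytearray:
    """Read one large file with several concurrent stripe readers."""
    size = os.path.getsize(path)
    buf = bytearray(size)
    fd = os.open(path, os.O_RDONLY)

    def read_stripe(offset: int) -> None:
        # os.pread reads at an absolute offset, so many threads can
        # share one descriptor without racing on the file position.
        end = min(offset + STRIPE_SIZE, size)
        pos = offset
        while pos < end:
            chunk = os.pread(fd, end - pos, pos)
            if not chunk:
                break
            buf[pos:pos + len(chunk)] = chunk
            pos += len(chunk)

    try:
        with ThreadPoolExecutor(max_workers=workers) as pool:
            # list() drains the iterator, waiting for all stripes and
            # surfacing any worker exception.
            list(pool.map(read_stripe, range(0, size, STRIPE_SIZE)))
    finally:
        os.close(fd)
    return buf
```

The intuition is that a single sequential stream through a FUSE mount is often limited to one connection's bandwidth, while concurrent stripes can aggregate throughput across connections; in the real system the striping presumably lives inside the storage layer rather than in application code.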