🤖 AI Summary
In large language model (LLM) serving, autoscaling faces a fundamental trade-off between scaling latency and parameter-caching overhead: conventional approaches rely on local parameter caches, causing service interruptions while models load and suffering data-plane bottlenecks when scaling across hosts. This paper proposes a local-cache-free, millisecond-scale real-time autoscaling framework. It introduces the first network-based direct parameter transmission mechanism with O(1) caching overhead, enabling zero-copy parameter loading over high-speed GPU-to-GPU interconnects, and designs a layer-granularity dynamic cooperative execution architecture that supports fine-grained load migration and multicast-based parameter distribution. Integrated cooperative inference scheduling eliminates cold-start delays. Experiments show up to an 86% reduction in tail latency, approaching the performance of an ideal configuration that caches full parameters on every host, while completely eliminating local parameter-storage overhead.
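A minimal sketch of the core idea of network-based parameter loading, assuming a PyTorch/NCCL setup (this is not the paper's implementation; the placeholder model, rank layout, and launch method are illustrative assumptions). The source rank stands in for an already-serving instance whose GPU memory holds live weights; every other rank is a freshly scaled instance that receives parameters directly over the compute network instead of reading a local cache.

```python
# Sketch only: pushing layer parameters to scaled replicas over the
# GPU compute network via NCCL broadcast, with no local parameter cache.
import torch
import torch.distributed as dist
import torch.nn as nn


def build_placeholder_model() -> nn.Module:
    # Scaled replicas only allocate tensors of the right shape;
    # the actual parameter values arrive over the network.
    # (In practice the source rank already holds the served model's weights.)
    return nn.Sequential(*[nn.Linear(4096, 4096) for _ in range(8)]).cuda()


def network_load(model: nn.Module, src_rank: int = 0) -> None:
    # One broadcast per parameter tensor. A multicast-style broadcast lets the
    # source send each byte O(1) times regardless of how many replicas scale up.
    for param in model.parameters():
        dist.broadcast(param.data, src=src_rank)


if __name__ == "__main__":
    # Launched e.g. with: torchrun --nproc_per_node=<num_gpus> sketch.py
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
    model = build_placeholder_model()
    network_load(model, src_rank=0)
    dist.barrier()
```

Because the transfer rides the GPU-to-GPU interconnect, the received weights land directly in device memory, which is what removes the host-cache copy from the scaling path.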
📝 Abstract
Model autoscaling is the key mechanism for serverless model-as-a-service, but it faces a fundamental trade-off between scaling speed and the storage/memory used to cache parameters, and it cannot meet frequent scaling requirements across multiple hosts. The key problem is that the data plane is slow: scaled instances remain stalled while parameters are being loaded. We first show that the data plane can be made fast with no (or O(1)) caching by loading parameters through the compute network between GPUs, because (1) its speed is comparable to the host cache and it is underutilized, and (2) scaling multiple instances requires no or O(1) caching with network-optimized multicast. Second, autoscaling can be made live by breaking the scaling abstraction from the coarse-grained instance level down to a fine-grained layer level. This allows us to offload layer computation from the overloaded serving instances to the scaled instance through cooperative execution, handling cases where the compute network is not sufficiently fast. Our system BLITZSCALE reduces serving tail latencies by up to 86% without caching, and we achieve performance comparable to (or even better than) an optimal setup where all parameters are cached on all hosts for autoscaling.
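A minimal sketch of layer-granularity cooperative execution, under assumed names (`cooperative_forward`, `run_local`, `run_remote`, `loaded` are illustrative, not the paper's API). The point it illustrates: once scaling operates per layer rather than per instance, the scaled instance can start absorbing work for whichever layers have already arrived, while the overloaded instance keeps executing the rest, so serving never blocks on a full model load.

```python
# Sketch only: per-layer routing during scaling. Layers whose parameters have
# arrived on the new instance run there; the others stay on the serving instance.
from typing import Callable, Set

import torch


def cooperative_forward(
    x: torch.Tensor,
    num_layers: int,
    loaded: Set[int],  # layers already present on the scaled instance
    run_local: Callable[[int, torch.Tensor], torch.Tensor],
    run_remote: Callable[[int, torch.Tensor], torch.Tensor],
) -> torch.Tensor:
    for layer_id in range(num_layers):
        if layer_id in loaded:
            # Offload this layer to the scaled instance to relieve the
            # overloaded one; activations cross the compute network.
            x = run_remote(layer_id, x)
        else:
            # Parameters not there yet: keep executing locally, no stall.
            x = run_local(layer_id, x)
    return x
```

As more layers finish loading, `loaded` grows and the routing shifts naturally toward the scaled instance, which is why the approach still helps even when the compute network cannot deliver the full model instantly.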