🤖 AI Summary
In existing online continual learning systems, the separation of training and inference leads to redundant recomputation of intermediate activations that were already produced during inference, accounting for 30–42% of total training time. This work proposes Alchemist, the first efficient online continual learning system, introducing a novel “inference-training co-design” paradigm for activation reuse. It records a minimal set of activations during the prefill phase, coupled with a GPU memory management strategy that combines forward-time activation release with backward-time on-demand reloading. Together, these enable selective activation logging, dynamic offloading/reloading, decoupled forward-backward scheduling, and low-overhead integration with inference serving. Experiments demonstrate up to a 1.72× improvement in training throughput, up to a 47% reduction in GPU memory footprint, a doubling of the maximum trainable sequence length (in tokens), and negligible inference latency overhead.
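To make the first technique concrete, below is a minimal sketch of prefill-only activation logging in a PyTorch setting. It is not Alchemist's actual code: the `PrefillLogger` class is a hypothetical name, and the "sequence length > 1" test is an illustrative heuristic for distinguishing prefill (whole prompt at once) from decode (one token at a time).

```python
import torch
import torch.nn as nn

class PrefillLogger:
    """Hypothetical helper: records a layer's input activations only
    during the prefill pass; single-token decode steps are skipped to
    keep serving overhead minimal."""

    def __init__(self):
        self.logged = []

    def hook(self, module, inputs, output):
        hidden = inputs[0]  # shape: (batch, seq_len, hidden_dim)
        # Heuristic: prefill processes the whole prompt in one call,
        # so its sequence dimension is > 1; decode emits single tokens.
        if hidden.size(1) > 1:
            self.logged.append(hidden.detach())

logger = PrefillLogger()
layer = nn.Linear(16, 16)  # stand-in for a transformer block
layer.register_forward_hook(logger.hook)

layer(torch.randn(1, 8, 16))  # prefill: 8 prompt tokens -> logged
layer(torch.randn(1, 1, 16))  # decode: 1 token -> skipped
print(len(logger.logged))     # 1
```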
📝 Abstract
Continual learning has emerged as a promising solution for refining models incrementally by leveraging user feedback, thereby enhancing model performance in applications like code completion, personal assistants, and chat interfaces. In particular, online continual learning – iteratively training the model with small batches of user feedback – has demonstrated notable performance improvements. However, the existing practice of segregating training and serving processes forces the online trainer to recompute intermediate results already computed during serving. Such redundant computation can account for 30–42% of total training time. In this paper, we propose Alchemist, to the best of our knowledge the first online continual learning system that efficiently reuses intermediate results computed during serving to reduce redundant computation with minimal impact on serving latency or capacity. Alchemist introduces two key techniques: (1) minimal activation recording and saving during serving, where activations are recorded and saved only during the prefill phase to minimize overhead; and (2) offloading of serving activations, which dynamically manages GPU memory by freeing activations in forward order and reloading them in backward order during the backward pass. Evaluations on the ShareGPT dataset show that, compared with a separate training cluster, Alchemist increases training throughput by up to 1.72×, reduces memory usage during training by up to 47%, and supports up to 2× more training tokens, all while maintaining negligible impact on serving latency.
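The second technique can be illustrated with a minimal sketch, again assuming a PyTorch setting. The `ActivationStore` class and its `offload`/`reload_backward` methods are hypothetical names, and the scheduling here is greatly simplified relative to Alchemist's decoupled forward-backward scheduling; the sketch only shows the core invariant that activations are released to host memory in forward order and streamed back in reverse order for the backward pass.

```python
import torch

class ActivationStore:
    """Hypothetical helper: holds serving-time activations on the host
    so their GPU copies can be freed as the forward pass proceeds, then
    streams them back in reverse order for the backward pass."""

    def __init__(self):
        self._host_copies = []  # appended in forward order

    def offload(self, act: torch.Tensor) -> None:
        # Copy into host memory (pinned when CUDA is present); the
        # caller can then drop its GPU reference to free device memory.
        pinned = torch.cuda.is_available()
        host = torch.empty(act.shape, dtype=act.dtype, pin_memory=pinned)
        host.copy_(act)
        self._host_copies.append(host)

    def reload_backward(self, device: str):
        # Backward consumes layers in reverse of forward order, so
        # reload last-in, first-out; non_blocking=True lets the copy
        # overlap with ongoing GPU compute when the source is pinned.
        for host in reversed(self._host_copies):
            yield host.to(device, non_blocking=True)

# Toy usage: offload three "layer activations", reload them for backward.
store = ActivationStore()
for _ in range(3):
    store.offload(torch.randn(2, 4))
device = "cuda" if torch.cuda.is_available() else "cpu"
for act in store.reload_backward(device):
    pass  # feed into the corresponding layer's backward computation
```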