🤖 AI Summary
To address the challenge of balancing energy efficiency and inference accuracy for large-scale deep neural network (DNN) models in cloud-based AI services, this paper proposes a hybrid cascaded serving architecture. The approach features a confidence-driven, multi-scale model collaboration mechanism that dynamically routes each inference request to the smallest model that satisfies a user-specified accuracy threshold. It further integrates heterogeneous model parallelism, fine-grained dataflow orchestration, and dynamic resource scheduling to jointly optimize model partitioning and replica placement. Crucially, the method matches the end-to-end inference accuracy of serving giant models alone while substantially reducing GPU resource demand. Experimental evaluation demonstrates up to a 19.8× improvement in energy efficiency over state-of-the-art (SOTA) baselines. This work delivers a scalable, system-level solution for high-accuracy, low-carbon AI inference in large-scale cloud environments.
📝 Abstract
Giant Deep Neural Networks (DNNs) have become indispensable for accurate and robust support of large-scale cloud-based AI services. However, serving giant DNNs is prohibitively expensive from an energy consumption viewpoint, easily exceeding that of training, due to the enormous scale of the GPU clusters needed to hold giant DNN model partitions and replicas. Existing approaches can optimize either energy efficiency or inference accuracy, but not both. To overcome this status quo, we propose HybridServe, a novel hybrid DNN model serving system that serves multiple sized versions (small to giant) of a model in tandem. Through a confidence-based hybrid model serving dataflow, HybridServe prefers to serve inference requests with energy-efficient smaller models so long as accuracy is not compromised, thereby reducing the number of replicas needed for giant DNNs. HybridServe also features a dataflow planner that efficiently partitions and replicates candidate models to maximize serving system throughput. Experimental results with a prototype implementation of HybridServe show that it reduces the energy footprint by up to 19.8× compared to state-of-the-art DNN model serving systems while matching the accuracy of serving solely with giant DNNs.
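The confidence-based routing idea can be illustrated with a minimal sketch. This is not HybridServe's actual implementation; the function names, the `(prediction, confidence)` model interface, and the single global threshold are assumptions made for illustration. The core logic — try progressively larger models and escalate only when the smaller model's confidence falls below a threshold — follows the cascading described in the abstract:

```python
def cascaded_infer(request, models, confidence_threshold=0.9):
    """Route a request through a cascade of models ordered small -> giant.

    Hypothetical sketch: each model is a callable returning
    (prediction, confidence). A request is answered by the first
    (smallest, most energy-efficient) model whose confidence meets the
    threshold; only low-confidence requests escalate to larger models.
    """
    for model in models[:-1]:
        prediction, confidence = model(request)
        if confidence >= confidence_threshold:
            return prediction  # smaller model suffices; no escalation
    # Fall back to the giant model, whose answer is always accepted,
    # preserving the accuracy of serving solely with the giant DNN.
    prediction, _ = models[-1](request)
    return prediction
```

Because most requests in practice are "easy" and resolved by the small models, the giant model handles only the residual hard cases, which is what shrinks the number of giant-DNN replicas the cluster must hold.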