🤖 AI Summary
This work addresses the inefficiencies of existing inference systems in multi-model pipelines, where reactive request dropping leads to low throughput, severe resource wastage, and difficulty meeting stringent latency constraints. To overcome these limitations, the authors propose a runtime-aware proactive dropping mechanism that dynamically evaluates request priority based on remaining latency budget and system load, adaptively discarding the lowest-priority requests before they consume excessive resources. This approach replaces conventional reactive strategies with more timely and precise dropping decisions. Experimental results on a 64-GPU cluster demonstrate that the proposed method improves effective system throughput (goodput) by 16%–176%, reduces the request drop rate by 1.6–17×, and decreases wasted computation by 1.5–62× compared to state-of-the-art techniques.
📝 Abstract
Modern deep neural network (DNN) applications integrate multiple DNN models into inference pipelines with stringent latency requirements for customized tasks. To mitigate extensive request timeouts caused by request accumulation, systems for inference pipelines commonly drop a subset of requests so the remaining ones can satisfy latency constraints. Since it is commonly believed that request dropping adversely affects goodput, existing systems only drop requests when they have to, which we call reactive dropping. However, this reactive policy cannot maintain high goodput, as it neither makes timely dropping decisions nor identifies the proper set of requests to drop, leading to requests being dropped too late or the wrong set of requests being dropped. We propose that the inference system should proactively drop certain requests in advance to enhance the goodput across the entire workload. To achieve this, we design an inference system, PARD. It enhances goodput with timely and precise dropping decisions by integrating a proactive dropping method that decides when to drop requests using runtime information of the inference pipeline, and an adaptive request priority mechanism that selects which specific requests to drop based on remaining latency budgets and workload intensity. Evaluation on a cluster of 64 GPUs over real-world workloads shows that PARD achieves $16\%$-$176\%$ higher goodput than the state of the art while reducing the drop rate and wasted computation resources by $1.6\times$-$17\times$ and $1.5\times$-$62\times$ respectively.
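To make the proactive-dropping idea concrete, here is a minimal sketch of dropping requests whose remaining latency budget cannot cover the backlog ahead of them, so they are discarded before consuming any compute. All names and the admission rule are illustrative assumptions for exposition, not PARD's actual algorithm, which additionally uses runtime pipeline information and workload intensity.

```python
class Request:
    """A request with an absolute deadline and an estimated service time."""
    def __init__(self, req_id, deadline, service_time):
        self.req_id = req_id
        self.deadline = deadline          # absolute time by which it must finish
        self.service_time = service_time  # estimated execution cost

def proactive_drop(queue, now):
    """Partition queued requests into (kept, dropped).

    Requests are considered in earliest-deadline-first order; a request is
    dropped early if the accumulated backlog plus its own service time
    exceeds its remaining latency budget, i.e. it would time out anyway.
    """
    kept, dropped = [], []
    backlog = 0.0  # total service time already admitted ahead of this request
    for req in sorted(queue, key=lambda r: r.deadline):
        remaining_budget = req.deadline - now
        if backlog + req.service_time > remaining_budget:
            dropped.append(req)   # drop before it wastes compute
        else:
            kept.append(req)
            backlog += req.service_time
    return kept, dropped

# Example: three requests, each needing 0.4s, with different deadlines.
reqs = [Request("a", 1.0, 0.4), Request("b", 0.5, 0.4), Request("c", 0.6, 0.4)]
kept, dropped = proactive_drop(reqs, now=0.0)
# "b" and "a" fit their budgets; "c" would finish at 0.8s > its 0.6s deadline.
```

The contrast with reactive dropping is that "c" is rejected at scheduling time rather than after it has already occupied the pipeline and timed out.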