🤖 AI Summary
To address the challenges of accelerator memory constraints, rapidly growing model counts, and high serving latency in enterprise self-hosted CodeLLMs, this paper proposes CACE, a context-aware dynamic model scheduling method. The approach introduces a fine-grained model priority scoring mechanism that jointly incorporates multidimensional runtime context—including model loading overhead, task latency sensitivity, expected output length, and sliding-window usage frequency—for real-time prioritization. It further integrates task-type classification and resource-sensitivity modeling to enable adaptive eviction decisions. Evaluated on realistic AI programming workloads, the method reduces first-token latency by 38.2% and end-to-end latency by 29.7% on average, decreases model eviction frequency by 61.4%, and improves GPU memory utilization and system response consistency.
📝 Abstract
AI-assisted coding tools powered by Code Large Language Models (CodeLLMs) are increasingly integrated into modern software development workflows. To address concerns around privacy, latency, and model customization, many enterprises opt to self-host these models. However, the diversity and growing number of CodeLLMs, coupled with limited accelerator memory, introduce practical challenges in model management and serving efficiency. This paper presents CACE, a novel context-aware model eviction strategy designed specifically to optimize self-hosted CodeLLM serving under resource constraints. Unlike traditional eviction strategies based solely on recency (e.g., Least Recently Used), CACE leverages multiple context-aware factors, including model load time, task-specific latency sensitivity, expected output length, and recent usage and future demand tracked through a sliding window. We evaluate CACE using realistic workloads that include both latency-sensitive code completion and throughput-intensive code reasoning tasks. Our experiments show that CACE reduces Time-to-First-Token (TTFT) and end-to-end (E2E) latency, while significantly lowering the number of model evictions compared to state-of-the-art systems. Ablation studies further demonstrate the importance of multi-factor eviction in balancing responsiveness and resource efficiency. This work contributes practical strategies for deploying scalable, low-latency AI coding assistants in real-world software engineering environments.
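The multi-factor eviction idea described above can be sketched as a priority score over resident models, where the lowest-scoring model is evicted when accelerator memory runs out. The following Python sketch is illustrative only: the class names, the linear weighted-sum scoring formula, and all weight values are assumptions for exposition, not the paper's actual implementation.

```python
from collections import deque

# Hypothetical sketch of CACE-style multi-factor eviction scoring.
# The linear scoring formula and the weights below are illustrative
# assumptions; the paper's exact priority function is not reproduced here.

class ModelState:
    """Runtime context tracked for one resident model."""
    def __init__(self, name, load_time_s, latency_sensitive, expected_output_tokens):
        self.name = name
        self.load_time_s = load_time_s              # cost to reload if evicted
        self.latency_sensitive = latency_sensitive  # e.g. completion (True) vs. reasoning (False)
        self.expected_output_tokens = expected_output_tokens
        self.recent_requests = deque()              # request timestamps (seconds)

    def usage_frequency(self, now, window_s=300.0):
        # Count only requests inside the sliding window; prune older ones.
        while self.recent_requests and self.recent_requests[0] < now - window_s:
            self.recent_requests.popleft()
        return len(self.recent_requests)

def priority(m, now, w_load=1.0, w_lat=2.0, w_len=0.01, w_freq=0.5):
    # Higher score = more valuable to keep resident.  The sign on the
    # expected-output-length term (penalizing long generations) is also
    # an assumption made for this sketch.
    return (w_load * m.load_time_s
            + w_lat * (1.0 if m.latency_sensitive else 0.0)
            - w_len * m.expected_output_tokens
            + w_freq * m.usage_frequency(now))

def choose_eviction(resident_models, now):
    """Pick the eviction victim: the model with the lowest priority score."""
    return min(resident_models, key=lambda m: priority(m, now))

if __name__ == "__main__":
    now = 1000.0
    completion = ModelState("completion-model", 5.0, True, 50)
    completion.recent_requests.extend([900.0, 950.0, 990.0])   # hot in window
    reasoning = ModelState("reasoning-model", 20.0, False, 1000)
    reasoning.recent_requests.extend([800.0])
    idle = ModelState("idle-model", 5.0, False, 100)           # no recent use

    victim = choose_eviction([completion, reasoning, idle], now)
    print(victim.name)
```

Under these example weights the idle, latency-insensitive model is evicted first, even though a pure LRU policy would have produced the same victim here; the factors diverge from LRU when, say, a rarely used model is expensive to reload or backs latency-sensitive completions.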