🤖 AI Summary
Real-time inference scheduling for agent-oriented vision-language models (VLMs) on a single GPU requires jointly optimizing request latency and system throughput. Method: We propose the first real-time scheduling framework tailored for agent VLMs, featuring: (1) an adaptive cross-stage parallelism mechanism with elastic GPU partitioning that dynamically pipelines heterogeneous multi-stage tasks; (2) an online Pareto-optimal resource allocation strategy; and (3) lightweight weight offloading for the vision encoder that minimizes memory overhead. Contribution/Results: Evaluated under both real-world and synthetic workloads, our framework reduces maximum latency by up to 23.3% over state-of-the-art baselines while sustaining high throughput, significantly improving the throughput-latency Pareto frontier in latency-sensitive scenarios.
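To make the cross-stage parallelism idea concrete, below is a minimal sketch of how the three VLM stages could be pipelined so that batches from different requests overlap on one GPU. All names (`Request`, `run_vision_encode`, the queue wiring) are hypothetical illustrations, not the paper's actual API; a real system would launch GPU kernels inside each stage, confined to that stage's elastic partition.

```python
# Minimal sketch of cross-stage pipelining for a multi-stage VLM serving loop.
# All identifiers here are hypothetical illustrations, not the paper's API.
import queue
import threading

class Request:
    def __init__(self, req_id, image, prompt):
        self.req_id = req_id
        self.image = image
        self.prompt = prompt
        self.embeddings = None   # filled by the vision-encode stage
        self.kv_cache = None     # filled by the prefill stage

def stage_worker(name, fn, inbox, outbox):
    """Run one pipeline stage: pull a request, process it, pass it on.

    Because the stages stress different resources (vision encode is
    compute-bound, decode is memory-bandwidth-bound), running them in
    separate workers lets requests in different stages overlap on one GPU.
    """
    while True:
        req = inbox.get()
        if req is None:           # shutdown sentinel
            if outbox is not None:
                outbox.put(None)  # propagate shutdown downstream
            break
        fn(req)
        if outbox is not None:
            outbox.put(req)

# Placeholder stage implementations; a real system would dispatch GPU work
# here inside each stage's elastic SM/memory partition.
def run_vision_encode(req): req.embeddings = f"emb({req.image})"
def run_prefill(req):       req.kv_cache = f"kv({req.embeddings}, {req.prompt})"
def run_decode(req):        print(f"req {req.req_id}: generated tokens")

encode_q, prefill_q, decode_q = queue.Queue(), queue.Queue(), queue.Queue()
stages = [
    ("vision_encode", run_vision_encode, encode_q, prefill_q),
    ("llm_prefill",   run_prefill,       prefill_q, decode_q),
    ("llm_decode",    run_decode,        decode_q,  None),
]
threads = [threading.Thread(target=stage_worker, args=s, daemon=True) for s in stages]
for t in threads:
    t.start()

for i in range(4):                # admit a few requests
    encode_q.put(Request(i, image=f"img{i}", prompt=f"task {i}"))
encode_q.put(None)                # start the shutdown cascade
for t in threads:
    t.join()
```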
📝 Abstract
This paper presents Nova, a real-time scheduling framework for serving agentic vision-language models (VLMs) on a single GPU that balances per-request latency against overall request-processing throughput. Our design begins by enabling effective pipelining across the vision encode, LLM prefill, and LLM decode stages of VLMs, exploiting their heterogeneous resource demands during execution and incorporating elastic GPU spatial partitioning among stages to fully utilize compute and memory resources. Building on this, we introduce a real-time scheduling algorithm that adaptively calibrates resource allocation among stages based on a Pareto-optimal analysis of the latency-throughput trade-off, allowing the system to sustain responsiveness and resource efficiency under dynamic request loads. To further alleviate GPU memory pressure, we design a lightweight weight-offloading strategy for vision encoders that preserves inference efficiency with minimal memory overhead. Extensive evaluations on both synthetic and real-world agent workloads demonstrate that Nova consistently outperforms state-of-the-art baselines, reducing maximum latency by up to 23.3% while maintaining competitive throughput.
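As a rough illustration of the Pareto-optimal analysis the abstract describes, the sketch below enumerates candidate per-stage GPU-share allocations under a toy cost model, keeps the points that are Pareto-optimal in the latency-throughput plane, and picks the highest-throughput allocation that meets a latency target. The cost model, per-stage times, and SLO value are assumptions made for illustration, not Nova's actual profiler or policy.

```python
# Hedged sketch of a Pareto-optimal latency-throughput allocation search.
# The cost model and constants are illustrative assumptions, not Nova's.
from itertools import product

def estimate(alloc):
    """Toy cost model for alloc = (encode_share, prefill_share, decode_share).

    Each stage's time is assumed inversely proportional to its GPU share;
    pipeline throughput is bounded by the slowest (bottleneck) stage.
    """
    base_ms = (20.0, 30.0, 80.0)            # assumed per-stage times at 100% GPU
    stage_ms = [b / s for b, s in zip(base_ms, alloc)]
    latency = sum(stage_ms)                 # end-to-end latency of one request
    throughput = 1000.0 / max(stage_ms)     # requests/s at the bottleneck stage
    return latency, throughput

def pareto_frontier(points):
    """Keep points not dominated in (lower latency, higher throughput)."""
    return [(a, (lat, thr)) for a, (lat, thr) in points
            if not any(l <= lat and t >= thr and (l, t) != (lat, thr)
                       for _, (l, t) in points)]

# Candidate allocations: per-stage shares in tenths that sum to the whole GPU.
shares = [i / 10 for i in range(1, 9)]
candidates = [(a, estimate(a)) for a in product(shares, repeat=3)
              if abs(sum(a) - 1.0) < 1e-9]

frontier = pareto_frontier(candidates)
slo_ms = 400.0                              # hypothetical latency target
feasible = [p for p in frontier if p[1][0] <= slo_ms]
# Maximize throughput under the SLO; fall back to the lowest-latency
# allocation if no frontier point meets the target.
best = (max(feasible, key=lambda p: p[1][1]) if feasible
        else min(frontier, key=lambda p: p[1][0]))
print(f"chosen shares {best[0]}: latency {best[1][0]:.0f} ms, "
      f"throughput {best[1][1]:.2f} req/s")
```

Under these assumed stage times, the search settles on the allocation that equalizes the bottleneck across stages; an online scheduler would rerun this selection as load shifts, which is the role the abstract assigns to Nova's adaptive calibration.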