Nova: Real-Time Agentic Vision-Language Model Serving with Adaptive Cross-Stage Parallelization

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-time inference scheduling for agent-oriented vision-language models (VLMs) on a single GPU requires jointly optimizing request latency and system throughput. The paper proposes the first real-time scheduling framework tailored to agentic VLMs, featuring: (1) an adaptive cross-stage parallelism mechanism that dynamically pipelines heterogeneous multi-stage tasks; (2) an online Pareto-optimal resource allocation strategy; and (3) lightweight visual-encoder weight offloading with elastic GPU memory partitioning. Evaluated on both real-world and synthetic workloads, the framework reduces end-to-end latency by up to 23.3% over state-of-the-art baselines while sustaining high throughput, significantly improving the throughput-latency Pareto frontier in latency-sensitive scenarios.

📝 Abstract
This paper presents Nova, a real-time scheduling framework for serving agentic vision-language models (VLMs) on a single GPU that balances per-request latency against overall request throughput. The design begins by enabling effective pipelining across the vision encode, LLM prefill, and LLM decode stages of VLMs, exploiting their heterogeneous resource demands during execution and incorporating elastic GPU spatial partitioning among stages to fully utilize compute and memory resources. Building on this, we introduce a real-time scheduling algorithm that adaptively calibrates resource allocation among stages based on a Pareto-optimal analysis of the latency-throughput trade-off, allowing the system to sustain responsiveness and resource efficiency under dynamic request loads. To further alleviate GPU memory pressure, we design a lightweight weight-offloading strategy for vision encoders that preserves inference efficiency with minimal memory overhead. Extensive evaluations on both synthetic and real-world agent workloads demonstrate that Nova consistently outperforms state-of-the-art baselines, reducing maximum latency by up to 23.3% while maintaining competitive throughput.
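To make the cross-stage pipelining idea concrete, here is a minimal toy sketch (not Nova's actual implementation; all names are hypothetical) of how requests can flow through vision encode, LLM prefill, and LLM decode stages concurrently, with each stage pulling from the previous stage's queue so that different requests overlap in time:

```python
import queue
import threading

def run_pipeline(requests):
    """Toy three-stage pipeline: encode -> prefill -> decode.

    Each stage runs in its own thread and hands work to the next stage
    through a FIFO queue, so stage 1 can start on request 2 while
    stage 2 is still busy with request 1 (cross-stage parallelism).
    """
    q_in, q_mid1, q_mid2 = queue.Queue(), queue.Queue(), queue.Queue()
    results = []

    def stage(name, inq, outq):
        while True:
            item = inq.get()
            if item is None:          # sentinel: propagate shutdown downstream
                if outq is not None:
                    outq.put(None)
                break
            item = item + [name]      # stand-in for the real stage's work
            if outq is not None:
                outq.put(item)
            else:
                results.append(item)  # final stage collects finished requests

    threads = [
        threading.Thread(target=stage, args=("encode", q_in, q_mid1)),
        threading.Thread(target=stage, args=("prefill", q_mid1, q_mid2)),
        threading.Thread(target=stage, args=("decode", q_mid2, None)),
    ]
    for t in threads:
        t.start()
    for r in requests:
        q_in.put([r])
    q_in.put(None)                    # no more requests
    for t in threads:
        t.join()
    return results
```

The sketch deliberately omits what makes the real problem hard: the three stages contend for the same GPU, which is why Nova adds elastic spatial partitioning and adaptive resource calibration on top of the basic pipeline structure.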
Problem

Research questions and friction points this paper is trying to address.

Optimizes real-time serving of agentic vision-language models on single GPUs
Balances latency and throughput via adaptive cross-stage parallelization scheduling
Addresses GPU memory pressure with lightweight weight offloading for encoders
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enables pipelining across vision encode, LLM prefill, and decode stages
Adaptively calibrates resource allocation using Pareto-optimal analysis
Implements lightweight weight offloading for vision encoder memory efficiency
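The Pareto-optimal calibration in the second bullet can be illustrated with a small sketch (hypothetical code, not the paper's algorithm): given candidate resource allocations, each profiled as a (latency, throughput) pair, keep only the non-dominated points and then pick the highest-throughput point that still meets a latency budget:

```python
def pareto_frontier(points):
    """Keep (latency, throughput) pairs not dominated by any other point.

    A point dominates another if it has lower-or-equal latency AND
    higher-or-equal throughput, and is strictly better in at least one.
    """
    frontier = []
    for lat, thr in points:
        dominated = any(
            (l2 <= lat and t2 >= thr) and (l2 < lat or t2 > thr)
            for l2, t2 in points
        )
        if not dominated:
            frontier.append((lat, thr))
    return sorted(frontier)

def pick_allocation(points, latency_budget):
    """Among frontier points within the budget, maximize throughput."""
    feasible = [p for p in pareto_frontier(points) if p[0] <= latency_budget]
    return max(feasible, key=lambda p: p[1]) if feasible else None
```

An online scheduler in this spirit would re-run the selection as the request load shifts, tightening or relaxing the latency budget to stay on the frontier.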
Yuhang Xu
Shanghai Jiao Tong University
Shengzhong Liu
Shanghai Jiao Tong University
Dong Zhang
Inspur Data Co., Ltd.
Bingheng Yan
Inspur Data Co., Ltd.
Fan Wu
Shanghai Jiao Tong University
Guihai Chen
Professor of Computer Science, Computer Science and Technology