VL-RouterBench: A Benchmark for Vision-Language Model Routing

📅 2025-12-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing work lacks a systematic, reproducible benchmark for evaluating vision-language model (VLM) routing. Method: We introduce the first log-driven VLM routing benchmark, covering 14 datasets, 17 VLMs, and 30,540 samples, and construct paired matrices quantifying per-sample quality and inference cost across models. We propose a multi-objective evaluation protocol integrating accuracy, cost, and throughput, normalized via harmonic-mean ranking; further, we quantify, for the first time, the gap between router performance and the oracle upper bound (up to 23.6%), exposing bottlenecks in fine-grained visual understanding and textual structure modeling. Contribution/Results: We open-source a complete toolchain, including data generation, scoring, and visualization, and evaluate 10 routing methods on 519,180 sample-model pairs, demonstrating significant performance gains and identifying concrete avenues for improvement.
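The oracle upper bound mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; the array names (`quality`, `cost`) and the trivial always-pick-model-0 router are hypothetical, chosen only to show how a per-sample quality matrix yields an oracle accuracy and a router gap.

```python
import numpy as np

# Hypothetical quality matrix: quality[i, j] = 1 if model j answers sample i
# correctly, else 0. cost[i, j] would be the inference cost (unused here).
rng = np.random.default_rng(0)
n_samples, n_models = 6, 3
quality = rng.integers(0, 2, size=(n_samples, n_models)).astype(float)

# Oracle: per sample, assume the best available model is chosen,
# so oracle accuracy is the mean of the per-sample row maxima.
oracle_acc = quality.max(axis=1).mean()

# A trivial illustrative router that always selects model 0.
router_choice = np.zeros(n_samples, dtype=int)
router_acc = quality[np.arange(n_samples), router_choice].mean()

# The routability gap the benchmark quantifies (up to 23.6% in the paper).
gap = oracle_acc - router_acc
```

Because the oracle takes the row-wise maximum, its accuracy upper-bounds any router's, so the gap is always non-negative.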

📝 Abstract
Multi-model routing has evolved from an engineering technique into essential infrastructure, yet existing work lacks a systematic, reproducible benchmark for evaluating vision-language model (VLM) routing. We present VL-RouterBench to systematically assess the overall capability of VLM routing systems. The benchmark is grounded in raw inference and scoring logs from VLMs and constructs quality and cost matrices over sample-model pairs. In scale, VL-RouterBench covers 14 datasets across 3 task groups, totaling 30,540 samples, and includes 15 open-source models and 2 API models, yielding 519,180 sample-model pairs and a total input-output token volume of 34,494,977. The evaluation protocol jointly measures average accuracy, average cost, and throughput, and builds a ranking score from the harmonic mean of normalized cost and accuracy to enable comparison across router configurations and cost budgets. On this benchmark, we evaluate 10 routing methods and baselines and observe a significant routability gain, while the best current routers still show a clear gap to the ideal oracle, indicating considerable room to improve router architectures through finer-grained visual cues and better modeling of textual structure. We will open-source the complete data construction and evaluation toolchain to promote comparability, reproducibility, and practical deployment in multimodal routing research.
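The ranking score described in the abstract combines normalized cost and accuracy via a harmonic mean. A minimal sketch, assuming min-max normalization over the observed accuracy and cost ranges (the function name and normalization choice are assumptions, not the paper's exact protocol):

```python
def ranking_score(acc, cost, acc_range, cost_range):
    """Harmonic mean of normalized accuracy and normalized (inverted) cost."""
    a_lo, a_hi = acc_range
    c_lo, c_hi = cost_range
    norm_acc = (acc - a_lo) / (a_hi - a_lo)    # 1.0 = highest accuracy
    norm_cost = (c_hi - cost) / (c_hi - c_lo)  # 1.0 = lowest cost
    if norm_acc + norm_cost == 0:
        return 0.0
    return 2 * norm_acc * norm_cost / (norm_acc + norm_cost)

# A router that is both accurate and cheap scores near 1.
best = ranking_score(acc=0.9, cost=0.2, acc_range=(0.5, 1.0), cost_range=(0.1, 1.0))
```

The harmonic mean rewards routers that are strong on both axes at once: a router that is very accurate but expensive (or cheap but inaccurate) is pulled toward its weaker score, which makes the ranking robust across cost budgets.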
Problem

Research questions and friction points this paper is trying to address.

Existing work lacks a systematic benchmark for evaluating vision-language model routing
Routing systems must be measured jointly on accuracy, cost, and throughput
The gap between current routers and the ideal oracle points to concrete room for improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

First log-driven benchmark for vision-language model routing evaluation
Constructs quality and cost matrices over sample-model pairs
Ranks routers by the harmonic mean of normalized cost and accuracy
Zhehao Huang
Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University
Baijiong Lin
Ph.D. Student, The Hong Kong University of Science and Technology (Guangzhou)
RLVR, LLM Post-Training, Multi-Task Learning
Jingyuan Zhang
Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University
Jingying Wang
Ph.D. Candidate of CSE, University of Michigan
Surgical Training, Human-Computer Interaction, AR/VR, Computer Graphics, Machine Learning
Yuhang Liu
The University of Adelaide
Representation Learning, LLMs, Latent Variable Models, Responsible AI
Ning Lu
The Hong Kong University of Science and Technology
Tao Li
Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University
Xiaolin Huang
Professor, Shanghai Jiao Tong University
machine learning, kernel method, deep neural network training, piecewise linear model