Robin: a Suite of Multi-Scale Vision-Language Models and the CHIRP Evaluation Benchmark

📅 2025-01-16
đŸ€– AI Summary
Current vision-language model (VLM) evaluation lacks rigorous cross-task, multi-scale benchmarks, hindering accurate characterization of model capabilities and limitations. To address this, we propose Robin, a family of multi-scale VLMs built to surface how evaluation outcomes vary with model scale, and introduce CHIRP, the first dedicated benchmark for long-form generation. CHIRP systematically formalizes three core dimensions: semantic coherence, fine-grained faithfulness, and logical consistency. We further develop an LLM-VE (Large Language Model and Vision Encoder) collaborative assessment framework and a hybrid annotation methodology. All Robin models, source code, the CHIRP dataset, and evaluation tools are publicly released. Extensive validation across 12 state-of-the-art VLMs demonstrates that CHIRP significantly improves sensitivity and discriminative power toward deep-seated failures, including hallucination and logical fragmentation, thereby advancing VLM evaluation toward greater robustness, comprehensiveness, and interpretability.

📝 Abstract
The proliferation of Vision-Language Models (VLMs) in the past several years calls for rigorous and comprehensive evaluation methods and benchmarks. This work analyzes existing VLM evaluation techniques, including automated metrics, AI-based assessments, and human evaluations across diverse tasks. We first introduce Robin - a novel suite of VLMs that we built by combining Large Language Models (LLMs) and Vision Encoders (VEs) at multiple scales - and use Robin to identify shortcomings of current evaluation approaches across scales. Next, to overcome the identified limitations, we introduce CHIRP - a new long-form response benchmark we developed for more robust and complete VLM evaluation. We provide open access to the Robin training code, model suite, and CHIRP benchmark to promote reproducibility and advance VLM research.
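The abstract describes building Robin by pairing LLMs with Vision Encoders at multiple scales. A common way to couple the two components is a learned connector that projects VE patch features into the LLM's token-embedding space (a LLaVA-style design); the toy sketch below illustrates that pattern. All names, dimensions, and the choice of a single linear connector are illustrative assumptions, not details taken from the paper.

```python
import random

# Toy dimensions for illustration only; Robin's actual encoder and
# LLM sizes are not stated on this page.
VE_DIM = 4    # vision-encoder feature size per image patch
LLM_DIM = 6   # LLM token-embedding size

random.seed(0)

def make_projector(in_dim, out_dim):
    """Build a single linear layer that maps VE features into the LLM
    embedding space (a hypothetical LLaVA-style connector)."""
    w = [[random.gauss(0, 0.02) for _ in range(out_dim)] for _ in range(in_dim)]

    def project(x):
        # Matrix-vector product: one projected "visual token" per patch.
        return [sum(x[i] * w[i][j] for i in range(in_dim)) for j in range(out_dim)]

    return project

def build_multimodal_sequence(image_feats, text_embeds, project):
    # Project each image-patch feature, then prepend the resulting
    # visual tokens to the text token embeddings fed to the LLM.
    return [project(p) for p in image_feats] + text_embeds

project = make_projector(VE_DIM, LLM_DIM)
image_feats = [[random.random() for _ in range(VE_DIM)] for _ in range(3)]   # 3 patches
text_embeds = [[random.random() for _ in range(LLM_DIM)] for _ in range(2)]  # 2 text tokens
seq = build_multimodal_sequence(image_feats, text_embeds, project)
print(len(seq), len(seq[0]))  # prints: 5 6
```

Swapping in differently sized LLMs and VEs while keeping this connector fixed is one way a multi-scale suite like Robin could be assembled, which is what makes scale-wise comparison of evaluation results possible.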
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Models
Performance Evaluation
Testing Methodology
Innovation

Methods, ideas, or system contributions that make the work stand out.

Robin multi-scale model suite
CHIRP evaluation benchmark
Open release of training code, models, and benchmark for reproducibility
đŸ‘„ Authors
Alexis Roger
Mila - Quebec AI Institute, Université de Montréal
Prateek Humane
Mila - Quebec AI Institute, Université de Montréal
Daniel Z Kaplan
realiz.ai
Kshitij Gupta
Mila - Quebec AI Institute, Université de Montréal
Qi Sun
Tokyo Institute of Technology
George Adamopoulos
Mila - Quebec AI Institute, Université de Montréal, McGill University
Jonathan Siu Chi Lim
Mila - Quebec AI Institute, Université de Montréal
Quentin Anthony
PhD Student, Ohio State University (HPC, Deep Learning, Parallel Computing)
Edwin Fennell
University College London
Irina Rish
Université de Montréal / Mila - Quebec AI Institute (Artificial Intelligence, Machine Learning, Neuroscience)