AI Summary
Current vision-language model (VLM) evaluation lacks rigorous, cross-task, multi-scale benchmarks, hindering accurate characterization of model capabilities and limitations. To address this, we propose Robin, a family of multi-scale VLMs designed to reverse-engineer evaluation insights, and introduce CHIRP, the first dedicated benchmark for long-form generation. CHIRP systematically formalizes three core dimensions: semantic coherence, fine-grained faithfulness, and logical consistency. We further develop an LLM-VE (Large Language Model plus Vision Encoder) collaborative assessment framework and a hybrid annotation methodology. All Robin models, source code, the CHIRP dataset, and evaluation tools are publicly released. Extensive validation across 12 state-of-the-art VLMs demonstrates that CHIRP significantly improves sensitivity and discriminative power toward deep-seated failures, including hallucination and logical fragmentation, thereby advancing VLM evaluation toward greater robustness, comprehensiveness, and interpretability.
Abstract
The proliferation of Vision-Language Models (VLMs) in recent years calls for rigorous and comprehensive evaluation methods and benchmarks. This work analyzes existing VLM evaluation techniques, including automated metrics, AI-based assessments, and human evaluations, across diverse tasks. We first introduce Robin, a novel suite of VLMs that we built by combining Large Language Models (LLMs) and Vision Encoders (VEs) at multiple scales, and use Robin to identify shortcomings of current evaluation approaches across scales. Next, to overcome the identified limitations, we introduce CHIRP, a new long-form response benchmark we developed for more robust and complete VLM evaluation. We provide open access to the Robin training code, model suite, and CHIRP benchmark to promote reproducibility and advance VLM research.