AlignMMBench: Evaluating Chinese Multimodal Alignment in Large Vision-Language Models

📅 2024-06-13
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
The absence of fine-grained alignment evaluation benchmarks hinders systematic assessment of Chinese vision-language models (VLMs). Method: We introduce AlignMMBench—the first Chinese multimodal alignment benchmark—covering 13 real-world task categories and single-/multi-turn dialogues, with 1,054 images and 4,978 high-quality human-annotated question-answer pairs. We conduct the first systematic evaluation of semantic alignment in Chinese VLMs; propose a prompt rewriting strategy to enhance evaluation robustness; and design CritiqueVLM, a rule-calibrated automatic evaluator outperforming GPT-4. Contribution/Results: Comprehensive evaluation of mainstream Chinese VLMs reveals critical bottlenecks in fine-grained visual understanding, cross-modal consistency, and dialogue coherence. All code and data are publicly released.

📝 Abstract
Evaluating the alignment capabilities of large Vision-Language Models (VLMs) is essential for determining their effectiveness as helpful assistants. However, existing benchmarks primarily focus on basic abilities using nonverbal methods, such as yes-no and multiple-choice questions. In this paper, we address this gap by introducing AlignMMBench, a comprehensive alignment benchmark specifically designed for emerging Chinese VLMs. This benchmark is meticulously curated from real-world scenarios and Chinese Internet sources, encompassing thirteen specific tasks across three categories, and includes both single-turn and multi-turn dialogue scenarios. Incorporating a prompt rewrite strategy, AlignMMBench encompasses 1,054 images and 4,978 question-answer pairs. To facilitate the evaluation pipeline, we propose CritiqueVLM, a rule-calibrated evaluator that exceeds GPT-4's evaluation ability. Finally, we report the performance of representative VLMs on AlignMMBench, offering insights into the capabilities and limitations of different VLM architectures. All evaluation codes and data are available on https://alignmmbench.github.io.
Problem

Research questions and friction points this paper is trying to address.

Existing benchmarks probe only basic abilities via nonverbal formats such as yes-no and multiple-choice questions
No fine-grained alignment evaluation benchmark exists for Chinese vision-language models (VLMs)
Model robustness across different phrasings of the same prompt is rarely measured
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces AlignMMBench, a benchmark for nuanced evaluation of Chinese VLMs (1,054 images, 4,978 QA pairs)
Develops CritiqueVLM, a rule-calibrated evaluator that surpasses GPT-4's evaluation ability
Measures an alignment score across prompt rewrites to assess model robustness
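The robustness idea above — scoring a model's answers to several rewrites of the same question and summarizing the spread — can be sketched as follows. This is a hypothetical illustration, not the paper's actual implementation; the function name `robustness_summary` and the 1-10 score scale are assumptions.

```python
# Hypothetical sketch of a prompt-rewrite robustness check (not the paper's
# actual code): summarize evaluator scores across rewrites of one question.
from statistics import mean, pstdev

def robustness_summary(scores_per_rewrite):
    """scores_per_rewrite: evaluator scores (assumed 1-10 scale) for one
    question, one score per prompt rewrite. Returns mean quality and the
    population standard deviation as a robustness indicator (lower = more
    consistent across phrasings)."""
    return {
        "mean": mean(scores_per_rewrite),
        "stdev": pstdev(scores_per_rewrite),
    }

# Example: a model scoring 8, 7, and 9 across three rewrites of one question.
print(robustness_summary([8, 7, 9]))
```

A low standard deviation suggests the model's answer quality does not hinge on how the question happens to be phrased, which is the property the prompt-rewrite strategy is designed to expose.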