VisionArena: 230K Real World User-VLM Conversations with Preference Labels

📅 2024-12-11
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Existing benchmarks fail to capture authentic user interactions with vision-language models (VLMs), hindering rigorous evaluation and optimization of VLMs on open-ended tasks (e.g., captioning, humor understanding) and complex reasoning tasks (e.g., spatial reasoning, planning). Method: We introduce VisionArena, the first large-scale, real-world VLM interaction dataset, comprising 230K dialogues from 73K users and spanning 45 VLMs and 138 languages. It contains three complementary subsets: VisionArena-Chat (single- and multi-turn chat logs), VisionArena-Battle (pairwise model comparisons with user preference votes), and VisionArena-Bench (an automatic evaluation suite of 500 diverse user prompts that approximates the live Chatbot Arena model rankings, replacing costly human assessment). Contribution/Results: VisionArena reveals systematic VLM deficiencies: open-ended tasks such as captioning and humor are highly style-dependent, and current models struggle with spatial reasoning and planning. Supervised fine-tuning on VisionArena-Chat improves model performance by 17 points on MMMU and 46 points on the WildVision benchmark, significantly enhancing alignment and cross-task generalization.
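To illustrate how pairwise preference votes become a model leaderboard, here is a minimal Bradley-Terry sketch in the spirit of Chatbot Arena's ranking method. The model names and votes below are toy assumptions, not VisionArena data.

```python
# Minimal sketch: Bradley-Terry scores from pairwise preference votes,
# the family of methods behind Chatbot Arena's live leaderboard.
# The vote tuples (model_a, model_b, winner) are illustrative toy data,
# not the actual VisionArena-Battle schema.
import numpy as np

votes = [
    ("gpt-4o", "claude-3", "a"),
    ("claude-3", "gpt-4o", "a"),
    ("claude-3", "llava", "a"),
    ("gpt-4o", "llava", "a"),
    ("llava", "gpt-4o", "a"),
]

models = sorted({m for a, b, _ in votes for m in (a, b)})
idx = {m: i for i, m in enumerate(models)}
scores = np.zeros(len(models))  # log-strengths, one per model

# Gradient ascent on the Bradley-Terry log-likelihood:
# P(a beats b) = sigmoid(score_a - score_b)
lr = 0.1
for _ in range(2000):
    grad = np.zeros_like(scores)
    for a, b, winner in votes:
        p_a = 1.0 / (1.0 + np.exp(scores[idx[b]] - scores[idx[a]]))
        y = 1.0 if winner == "a" else 0.0
        grad[idx[a]] += y - p_a
        grad[idx[b]] -= y - p_a
    scores += lr * grad
    scores -= scores.mean()  # scores are identified only up to a constant

for m in sorted(models, key=lambda m: -scores[idx[m]]):
    print(f"{m}: {scores[idx[m]]:+.2f}")
```

A subset of 500 such prompts with votes is what lets VisionArena-Bench approximate the full live rankings cheaply.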

📝 Abstract
With the growing adoption and capabilities of vision-language models (VLMs) comes the need for benchmarks that capture authentic user-VLM interactions. In response, we create VisionArena, a dataset of 230K real-world conversations between users and VLMs. Collected from Chatbot Arena - an open-source platform where users interact with VLMs and submit preference votes - VisionArena spans 73K unique users, 45 VLMs, and 138 languages. Our dataset contains three subsets: VisionArena-Chat, 200K single and multi-turn conversations between a user and a VLM; VisionArena-Battle, 30K conversations comparing two anonymous VLMs with user preference votes; and VisionArena-Bench, an automatic benchmark of 500 diverse user prompts that efficiently approximate the live Chatbot Arena model rankings. Additionally, we highlight the types of questions asked by users, the influence of response style on preference, and areas where models often fail. We find open-ended tasks like captioning and humor are highly style-dependent, and current VLMs struggle with spatial reasoning and planning tasks. Lastly, we show finetuning the same base model on VisionArena-Chat outperforms Llava-Instruct-158K, with a 17-point gain on MMMU and a 46-point gain on the WildVision benchmark. Dataset at https://huggingface.co/lmarena-ai
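A minimal sketch for pulling the data from the Hugging Face Hub. The per-subset repository IDs and split name below are assumptions based on the subset names; verify them on the organization page linked above before running.

```python
# Minimal sketch: loading VisionArena subsets with the `datasets` library.
# The repo IDs and split are assumptions; check https://huggingface.co/lmarena-ai
# for the actual names.
from datasets import load_dataset

chat = load_dataset("lmarena-ai/VisionArena-Chat", split="train")
battle = load_dataset("lmarena-ai/VisionArena-Battle", split="train")

print(chat)       # inspect features: conversation turns, images, model, language
print(battle[0])  # one battle: two anonymous responses plus the preference vote
```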
Problem

Research questions and friction points this paper is trying to address.

Existing benchmarks fail to capture real-world user-VLM interactions
Evaluating VLM performance across diverse tasks and languages is difficult
VLM weaknesses in spatial reasoning and planning are poorly characterized
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dataset of 230K real-world user-VLM conversations across 45 models and 138 languages
Pairwise preference labels collected on the Chatbot Arena platform
Finetuning on VisionArena-Chat yields 17-point MMMU and 46-point WildVision gains (see the data-shaping sketch after this list)
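Using VisionArena-Chat for supervised fine-tuning mostly comes down to mapping each logged conversation into the (images, messages) format common VLM SFT trainers expect. A minimal sketch, assuming hypothetical record fields "conversation" and "images"; the real schema may differ.

```python
# Minimal sketch: shaping one logged conversation into an SFT example.
# The input field names ("conversation", "images") are hypothetical;
# inspect the actual VisionArena-Chat schema and adapt accordingly.
from typing import Any


def to_sft_example(record: dict[str, Any]) -> dict[str, Any]:
    """Map one chat log to the (images, messages) pair most VLM SFT trainers expect."""
    messages = [
        {"role": turn["role"], "content": turn["content"]}
        for turn in record["conversation"]
    ]
    return {"images": record.get("images", []), "messages": messages}


# Toy record standing in for one dataset row.
example = to_sft_example({
    "conversation": [
        {"role": "user", "content": "What is funny about this image?"},
        {"role": "assistant", "content": "The cat is wearing a tiny party hat."},
    ],
    "images": ["<PIL.Image placeholder>"],
})
print(example["messages"][0]["content"])
```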