DixitWorld: Evaluating Multimodal Abductive Reasoning in Vision-Language Models with Multi-Agent Dixit Gameplay

📅 2025-10-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work evaluates multimodal abductive reasoning in vision-language models (VLMs) within dynamic multi-agent environments, probing the trade-off between generative creativity and discriminative understanding. To this end, the authors propose DixitWorld, an evaluation framework comprising DixitArena, a dynamic interactive environment for assessing hypothesis generation, and DixitBench, a static benchmark for evaluating hypothesis selection. DixitWorld is the first to bring incomplete-information multi-agent games into multimodal VLM evaluation, integrating dynamic prompt generation, image-text matching, and question answering to enable decomposable assessment. Experiments show that smaller open-source VLMs outperform larger models at creative clue generation, whereas larger models excel at discriminative tasks. Crucially, results on DixitBench correlate strongly with listener performance in DixitArena, validating the framework and surfacing a previously unobserved phenomenon: role-dependent performance disparities across agent roles.

📝 Abstract
Multimodal abductive reasoning, the generation and selection of explanatory hypotheses from partial observations, is a cornerstone of intelligence. Current evaluations of this ability in vision-language models (VLMs) are largely confined to static, single-agent tasks. Inspired by Dixit, we introduce DixitWorld, a comprehensive evaluation suite designed to deconstruct this challenge. DixitWorld features two core components: DixitArena, a dynamic, multi-agent environment that evaluates both hypothesis generation (a "storyteller" crafting cryptic clues) and hypothesis selection ("listeners" choosing the target image from decoys) under imperfect information; and DixitBench, a static QA benchmark that isolates the listener's task for efficient, controlled evaluation. Results from DixitArena reveal distinct, role-dependent behaviors: smaller open-source models often excel as creative storytellers, producing imaginative yet less discriminative clues, whereas larger proprietary models demonstrate superior overall performance, particularly as listeners. Performance on DixitBench strongly correlates with listener results in DixitArena, validating it as a reliable proxy for hypothesis selection. Our findings reveal a key trade-off between generative creativity and discriminative understanding in multimodal abductive reasoning, a central challenge for developing more balanced and capable vision-language agents.
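The storyteller/listener round structure described above can be sketched as a small simulation. This is a minimal illustration of Dixit-style gameplay, not the paper's actual DixitArena implementation; the function names, the number of decoys, and the scoring rule are assumptions drawn from the standard Dixit board game, where a clue that fools everyone or no one earns the storyteller nothing.

```python
import random

def play_round(images, storyteller_clue_fn, listener_pick_fn, n_listeners=3):
    """Simulate one hypothetical Dixit-style round (names are illustrative).

    storyteller_clue_fn(image) -> str : cryptic clue (hypothesis generation)
    listener_pick_fn(clue, candidates) -> int : chosen index (hypothesis selection)
    """
    target = random.choice(images)
    clue = storyteller_clue_fn(target)  # hypothesis generation under imperfect info
    decoys = [img for img in images if img != target]
    candidates = [target] + random.sample(decoys, min(3, len(decoys)))
    random.shuffle(candidates)

    # Each listener abductively selects the image best explained by the clue.
    correct = sum(
        candidates[listener_pick_fn(clue, candidates)] == target
        for _ in range(n_listeners)
    )

    # Assumed Dixit-style scoring: the storyteller scores only if the clue is
    # cryptic enough to fool some listeners but transparent enough for others.
    storyteller_score = 3 if 0 < correct < n_listeners else 0
    return clue, correct, storyteller_score
```

In an evaluation harness, `storyteller_clue_fn` and `listener_pick_fn` would wrap VLM calls; the scoring rule is what produces the creativity/discriminability trade-off the paper highlights, since maximally obvious and maximally obscure clues both score zero.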
Problem

Research questions and friction points this paper is trying to address.

Evaluating multimodal abductive reasoning in vision-language models
Assessing hypothesis generation and selection using multi-agent gameplay
Analyzing trade-offs between generative creativity and discriminative understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent Dixit gameplay evaluates multimodal reasoning
Dynamic environment tests hypothesis generation and selection
Static benchmark isolates listener task for evaluation