🤖 AI Summary
This work addresses the limited ability of multimodal large language models (MLLMs) to proactively request user assistance in perception-constrained scenarios, such as occluded, low-quality, or abstract sketch inputs. To this end, we present ProactiveBench, the first systematically defined benchmark for evaluating proactive behavior in MLLMs, integrating seven existing datasets into a unified evaluation framework. Our comprehensive assessment of 22 prominent models reveals that model scale shows no significant correlation with proactivity, and that in-context learning may introduce detrimental biases. We further propose a reinforcement learning–based fine-tuning approach that substantially enhances both proactivity and generalization across seen and unseen tasks. The benchmark and associated code are publicly released to support future research.
📝 Abstract
Effective collaboration begins with knowing when to ask for help. For example, when trying to identify an occluded object, a human would ask someone to remove the obstruction. Can MLLMs exhibit similar "proactive" behavior by requesting simple user interventions? To investigate this, we introduce ProactiveBench, a benchmark built from seven repurposed datasets that tests proactiveness across tasks such as recognizing occluded objects, enhancing image quality, and interpreting coarse sketches. We evaluate 22 MLLMs on ProactiveBench, showing that (i) they generally lack proactiveness; (ii) proactiveness does not correlate with model capacity; and (iii) "hinting" at proactiveness yields only marginal gains. Surprisingly, we find that conversation histories and in-context learning introduce negative biases, hindering performance. Finally, we explore a simple fine-tuning strategy based on reinforcement learning: the results suggest that proactiveness can be learned, and even generalizes to unseen scenarios. We publicly release ProactiveBench as a first step toward building proactive multimodal models.