🤖 AI Summary
Existing virtual agent benchmarks suffer from uncontrolled task complexity, narrow scenario coverage, single-dimensional evaluation, and heavy reliance on manual annotation. To address these limitations, we propose OmniBench—the first self-generating, cross-platform, graph-structured, multidimensional benchmark. It leverages a subtask knowledge graph and an automated synthesis pipeline to generate controllable-complexity tasks across 20 diverse scenarios (36K tasks total). Our approach introduces a novel graph-structured task synthesis paradigm and the OmniEval framework, enabling joint assessment of subtask-level accuracy, graph-topological validity, and ten core agent capabilities. Synthesized data achieves a 91% human acceptance rate and yields higher training efficiency than manually annotated data. Comprehensive evaluation of over 20 state-of-the-art multimodal large language model (MLLM)-based agents reveals precise capability bottlenecks, facilitating quantifiable, systematic advancement of virtual agent intelligence.
📝 Abstract
As multimodal large language models (MLLMs) advance, MLLM-based virtual agents have demonstrated remarkable performance. However, existing benchmarks face significant limitations, including uncontrollable task complexity, extensive manual annotation with limited scenario coverage, and a lack of multidimensional evaluation. To address these challenges, we introduce OmniBench, a self-generating, cross-platform, graph-based benchmark with an automated pipeline that synthesizes tasks of controllable complexity through subtask composition. To evaluate the diverse capabilities of virtual agents on the graph, we further present OmniEval, a multidimensional evaluation framework comprising subtask-level evaluation, graph-based metrics, and comprehensive tests across 10 capabilities. Our synthesized dataset contains 36K graph-structured tasks spanning 20 scenarios and achieves a 91% human acceptance rate. Training experiments show that our graph-structured data guides agents more efficiently than manually annotated data. We conduct multidimensional evaluations of various open-source and closed-source models, revealing their performance across different capabilities and paving the way for future advancements. Our project is available at https://omni-bench.github.io/.