Graph2Eval: Automatic Multimodal Task Generation for Agents via Knowledge Graphs

📅 2025-10-01
🤖 AI Summary
Current agent evaluation relies on static datasets, failing to reflect agents' true capabilities in dynamic, multimodal environments; existing LLM-based task-synthesis methods primarily target text- or image-only tasks and lack systematic modeling of multi-step web interactions. To address this, we propose Graph2Eval, a knowledge graph-based framework for automated agent task generation. It constructs a heterogeneous knowledge graph from diverse data sources, then employs subgraph sampling, task templates, and meta-path traversal to generate structured multimodal tasks spanning text, images, and web navigation. Task quality and executability are ensured via node-reachability checks, LLM-based scoring, and semantic-similarity filtering. We release Graph2Eval-Bench, a benchmark of 1,319 high-quality tasks. Experiments demonstrate that Graph2Eval-Bench effectively discriminates agent performance across reasoning, collaboration, and interactive capabilities.

📝 Abstract
As multimodal LLM-driven agents continue to advance in autonomy and generalization, evaluation based on static datasets can no longer adequately assess their true capabilities in dynamic environments and diverse tasks. Existing LLM-based synthetic data methods are largely designed for LLM training and evaluation, and thus cannot be directly applied to agent tasks that require tool use and interactive capabilities. While recent studies have explored automatic agent task generation with LLMs, most efforts remain limited to text or image analysis, without systematically modeling multi-step interactions in web environments. To address these challenges, we propose Graph2Eval, a knowledge graph-based framework that automatically generates both multimodal document comprehension tasks and web interaction tasks, enabling comprehensive evaluation of agents' reasoning, collaboration, and interactive capabilities. In our approach, knowledge graphs constructed from multi-source external data serve as the task space, where we translate semantic relations into structured multimodal tasks using subgraph sampling, task templates, and meta-paths. A multi-stage filtering pipeline based on node reachability, LLM scoring, and similarity analysis is applied to guarantee the quality and executability of the generated tasks. Furthermore, Graph2Eval supports end-to-end evaluation of multiple agent types (Single-Agent, Multi-Agent, Web Agent) and measures reasoning, collaboration, and interaction capabilities. We instantiate the framework with Graph2Eval-Bench, a curated dataset of 1,319 tasks spanning document comprehension and web interaction scenarios. Experiments show that Graph2Eval efficiently generates tasks that differentiate agent and model performance, revealing gaps in reasoning, collaboration, and web interaction across different settings and offering a new perspective for agent evaluation.
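The generation step the abstract describes (sampling chains from a knowledge graph along a meta-path, then filling a task template) can be sketched in plain Python. Everything below is an illustrative assumption, not the paper's implementation: the toy graph, the relation names, and the `walk_meta_path` / `fill_template` helpers are invented for this sketch.

```python
# Toy knowledge graph as an adjacency list: node -> [(relation, target), ...].
# Node and relation names are illustrative placeholders, not from the paper.
KG = {
    "doc:report": [("contains_figure", "fig:chart1"), ("links_to", "page:pricing")],
    "fig:chart1": [("depicts", "concept:revenue")],
    "page:pricing": [("navigates_to", "form:signup")],
}

def walk_meta_path(kg, start, meta_path):
    """Return every node chain from `start` whose edges follow the given
    sequence of relation types (a meta-path)."""
    chains = [[start]]
    for relation in meta_path:
        next_chains = []
        for chain in chains:
            for rel, target in kg.get(chain[-1], []):
                if rel == relation:
                    next_chains.append(chain + [target])
        chains = next_chains
    return chains

def fill_template(chain):
    """Turn a sampled (document, figure, concept) chain into a task prompt."""
    doc, fig, concept = chain
    return f"In {doc}, what does {fig} show about {concept}?"

# Sample document-comprehension tasks along one meta-path.
chains = walk_meta_path(KG, "doc:report", ["contains_figure", "depicts"])
tasks = [fill_template(c) for c in chains]
print(tasks)  # one task string per matched chain
```

The same walk with a navigation meta-path such as `["links_to", "navigates_to"]` would yield chains suitable for web-interaction templates instead, which is why a single graph can serve as the task space for both task families.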
Problem

Research questions and friction points this paper is trying to address.

Evaluating multimodal agents in dynamic environments beyond static datasets
Generating agent tasks requiring tool use and interactive capabilities
Systematically modeling multi-step web interactions for comprehensive evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates multimodal tasks using knowledge graphs
Applies filtering pipeline for task quality assurance
Supports end-to-end evaluation of multiple agent types
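The quality-assurance stage can be sketched as a two-gate filter: score each candidate task, then drop near-duplicates of tasks already accepted. This is a minimal stand-in, with the LLM judge replaced by a toy scoring function and semantic similarity approximated with stdlib `difflib`; the function names and thresholds are assumptions, not the paper's pipeline.

```python
from difflib import SequenceMatcher

def filter_tasks(tasks, score_fn, min_score=0.7, max_similarity=0.9):
    """Multi-stage filter sketch: drop low-scoring candidates, then drop
    candidates too similar to an already-accepted task."""
    kept = []
    for task in tasks:
        if score_fn(task) < min_score:  # stage 1: quality gate
            continue
        if any(SequenceMatcher(None, task, k).ratio() > max_similarity
               for k in kept):          # stage 2: similarity dedup
            continue
        kept.append(task)
    return kept

# Stand-in scorer; in practice an LLM would judge clarity and executability.
def toy_score(task):
    return 1.0 if "?" in task else 0.0

candidates = [
    "What does fig:chart1 reveal about revenue?",
    "What does fig:chart1 reveal about revenue ?",  # near-duplicate
    "broken fragment with no question",             # fails the quality gate
]
print(filter_tasks(candidates, toy_score))
```

A real pipeline would add the reachability check before scoring (discard any task whose referenced nodes cannot be reached in the sampled subgraph) and would use embedding-based similarity rather than character-level matching.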
👥 Authors
Yurun Chen, Tsinghua University
Xavier Hu, Zhejiang University
Yuhan Liu, Xiamen University
Ziqi Wang, Zhejiang University
Zeyi Liao, The Ohio State University
Lin Chen, Ant Group
Feng Wei, Michigan State University
Yuxi Qian, Ant Group
Bo Zheng, Ant Group
Keting Yin, Zhejiang University
Shengyu Zhang, Zhejiang University