🤖 AI Summary
Existing instruction datasets lack tool interaction, and agent benchmarks rely on costly manual annotation, severely limiting scalability. Method: We propose the first fully automated task generation framework targeting the scarcity of multi-step decision-making, tool-calling, and adaptive reasoning tasks. Our approach introduces a novel depth-width expansion-based atomic task evolution mechanism for hierarchically controlled complexity generation. We further construct the first large-scale (36K instances), synthetically generated benchmark dataset featuring verifiable execution traces, achieved through task graph modeling, tool-interaction trace synthesis, structured expansion, and verifiability constraint injection. Contribution/Results: The framework significantly enhances prompt optimization and supervised fine-tuning of foundation models. Empirical evaluation demonstrates state-of-the-art performance across multiple agent-oriented assessment metrics.
📄 Abstract
Agentic tasks, which require multi-step problem solving with autonomy, tool use, and adaptive reasoning, are becoming increasingly central to the advancement of NLP and AI. However, existing instruction data lacks tool interaction, and current agentic benchmarks rely on costly human annotation, limiting their scalability. We introduce TaskCraft, an automated workflow for generating difficulty-scalable, multi-tool, and verifiable agentic tasks with execution trajectories. TaskCraft expands atomic tasks using depth-based and width-based extensions to create structurally and hierarchically complex challenges. Empirical results show that these tasks improve prompt optimization in the generation workflow and enhance supervised fine-tuning of agentic foundation models. We present a large-scale synthetic dataset of approximately 36,000 tasks with varying difficulty to support future research on agent tuning and evaluation.
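To make the depth-based and width-based extension idea concrete, here is a minimal sketch of how atomic tasks might be composed. This is an illustrative assumption, not the paper's actual implementation: the `Task` structure, the two extension functions, and the toy questions are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    """Hypothetical representation of a TaskCraft-style task."""
    question: str
    answer: str
    depth: int = 1                       # number of chained reasoning hops
    subtasks: List["Task"] = field(default_factory=list)

def depth_extend(task: Task, wrapper_question: str, new_answer: str) -> Task:
    """Depth-based extension (sketch): wrap an existing task so that
    answering the new question first requires solving the old one,
    adding one reasoning hop."""
    return Task(question=wrapper_question, answer=new_answer,
                depth=task.depth + 1, subtasks=[task])

def width_extend(tasks: List[Task], merged_question: str, merged_answer: str) -> Task:
    """Width-based extension (sketch): merge several atomic tasks into a
    composite task whose answer requires resolving all of them."""
    return Task(question=merged_question, answer=merged_answer,
                depth=max(t.depth for t in tasks), subtasks=tasks)

# Toy usage with made-up atomic tasks
a = Task("Who founded company X?", "Alice")
b = Task("In which year was company X founded?", "1999")
deep = depth_extend(a, "Where was the founder of company X born?", "Paris")
wide = width_extend([a, b], "Who founded company X, and in which year?", "Alice; 1999")
print(deep.depth, len(wide.subtasks))  # → 2 2
```

Under this sketch, depth extension increases the hop count (harder sequential reasoning), while width extension increases the number of parallel subgoals, giving the hierarchically controlled difficulty the summary describes.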