ML-Tool-Bench: Tool-Augmented Planning for ML Tasks

πŸ“… 2025-11-29
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing tool-use benchmarks focus on single-step tool selection or parameter extraction, failing to assess the multi-step planning capabilities essential for ML agents. Method: We introduce the first tool-augmented planning benchmark specifically designed for machine learning tasks, comprising 61 domain-specific tools and 15 Kaggle tabular data challenges, with in-memory object management to support state-aware workflow execution. To overcome ReAct's limitations on complex ML pipelines, the approach combines structured, feedback-driven shaping rewards with explicit sub-task decomposition, unifying LLM-guided tree-search planning, tool-call generation, memory management, and stepwise reward modeling. Contribution/Results: With GPT-4o, the method improves over ReAct by 16.52 percentile positions (median across all Kaggle challenges), significantly enhancing end-to-end real-world ML task solving by autonomous agents.
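The summary's "in-memory object management" can be pictured as a shared store that tools read from and write to by agent-chosen names, so intermediate results persist across steps. The sketch below is purely illustrative; the class and function names (`ObjectStore`, `drop_missing_rows`) are assumptions, not the benchmark's actual API.

```python
# Hypothetical sketch of an in-memory named object store for tool-augmented
# agents: tools reference objects by name instead of passing raw data,
# letting later pipeline steps retrieve earlier results.

class ObjectStore:
    def __init__(self):
        self._objects = {}

    def save(self, name, obj):
        self._objects[name] = obj

    def load(self, name):
        if name not in self._objects:
            raise KeyError(f"no object named {name!r}; the agent must save it first")
        return self._objects[name]

    def names(self):
        return sorted(self._objects)

# Example tool: takes object names, not data, as its arguments.
def drop_missing_rows(store, input_name, output_name):
    rows = store.load(input_name)
    cleaned = [r for r in rows if None not in r.values()]
    store.save(output_name, cleaned)
    return f"saved {len(cleaned)} rows as {output_name!r}"

store = ObjectStore()
store.save("train_raw", [{"x": 1, "y": 2}, {"x": None, "y": 3}])
print(drop_missing_rows(store, "train_raw", "train_clean"))
print(store.names())
```

A design like this keeps tool-call arguments short (names, not serialized data), which is what makes state-aware multi-step workflows tractable for an LLM planner.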

πŸ“ Abstract
The development of autonomous machine learning (ML) agents capable of end-to-end data science workflows represents a significant frontier in artificial intelligence. These agents must orchestrate complex sequences of data analysis, feature engineering, model selection, and hyperparameter optimization, tasks that require sophisticated planning and iteration. While recent work on building ML agents has explored using large language models (LLMs) for direct code generation, tool-augmented approaches offer greater modularity and reliability. However, existing tool-use benchmarks focus primarily on task-specific tool selection or argument extraction for tool invocation, failing to evaluate the sophisticated planning capabilities required for ML agents. In this work, we introduce a comprehensive benchmark for evaluating tool-augmented ML agents using a curated set of 61 specialized tools and 15 tabular ML challenges from Kaggle. Our benchmark goes beyond traditional tool-use evaluation by incorporating in-memory named object management, allowing agents to flexibly name, save, and retrieve intermediate results throughout the workflows. We demonstrate that standard ReAct-style approaches struggle to generate valid tool sequences for complex ML pipelines, and that tree search methods with LLM-based evaluation underperform due to inconsistent state scoring. To address these limitations, we propose two simple approaches: 1) using shaped deterministic rewards with structured textual feedback, and 2) decomposing the original problem into a sequence of sub-tasks, which significantly improves trajectory validity and task performance. Using GPT-4o, our approach improves over ReAct by 16.52 percentile positions, taking the median across all Kaggle challenges. We believe our work provides a foundation for developing more capable tool-augmented planning ML agents.
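The abstract's first fix, replacing inconsistent LLM-based state scoring with shaped deterministic rewards plus structured textual feedback, could look roughly like the following. This is a minimal sketch under assumed conventions (the step schema, tool names, and bonus values are invented for illustration, not taken from the paper):

```python
# Hedged sketch of a shaped deterministic reward: score a partial tool-call
# trajectory with fixed programmatic checks instead of an LLM judge, and
# return structured textual feedback the planner can condition on.

def shaped_reward(trajectory, store_names):
    """Each step is {"tool": str, "args": [object names]}."""
    score, feedback = 0.0, []
    for step in trajectory:
        missing = [a for a in step["args"] if a not in store_names]
        if missing:
            feedback.append(f"step {step['tool']}: unknown object(s) {missing}")
        else:
            score += 1.0  # shaping credit for each well-formed call
    if any(s["tool"] == "submit_predictions" for s in trajectory):
        score += 5.0  # terminal bonus for completing the pipeline
    else:
        feedback.append("pipeline incomplete: no submit_predictions call")
    return score, feedback

traj = [
    {"tool": "load_csv", "args": []},
    {"tool": "train_model", "args": ["train_clean"]},
]
score, fb = shaped_reward(traj, store_names={"train_raw"})
print(score, fb)
```

Because the checks are deterministic, the same state always receives the same score, which is exactly the consistency the abstract says LLM-based evaluation lacks in tree search.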
Problem

Research questions and friction points this paper is trying to address.

Evaluates tool-augmented ML agents' planning for data science workflows
Addresses limitations in existing benchmarks for ML task orchestration
Improves trajectory validity in complex ML pipeline planning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tool-augmented planning benchmark with 61 specialized tools
In-memory named object management for intermediate results
Shaped deterministic rewards and sub-task decomposition for planning
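The sub-task decomposition idea listed above can be sketched as a fixed sequence of stages, each planned separately with earlier results carried forward as context. The stage names and the `solve_subtask` hook are assumptions for illustration, not the benchmark's actual interface:

```python
# Illustrative sketch: decompose a tabular ML challenge into ordered
# sub-tasks so the planner handles one bounded goal at a time instead of
# planning the full pipeline in a single shot.

SUBTASKS = [
    "load and inspect data",
    "clean and engineer features",
    "select and train a model",
    "tune hyperparameters",
    "generate and submit predictions",
]

def run_decomposed(solve_subtask):
    """Drive a planner through each sub-task, carrying context forward."""
    context = []
    for goal in SUBTASKS:
        result = solve_subtask(goal, context)  # planner solves one goal
        context.append((goal, result))         # later stages see earlier results
    return context

# Toy planner stand-in that just acknowledges the goal it was given.
log = run_decomposed(lambda goal, ctx: f"done: {goal}")
print(len(log), log[-1])
```

Shrinking each planning problem this way shortens the tool sequences the model must get right in one shot, which is plausibly why the paper reports improved trajectory validity.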