ML-Agent: Reinforcing LLM Agents for Autonomous Machine Learning Engineering

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current LLM-based agents for autonomous machine learning engineering heavily rely on manual prompt engineering and lack adaptive optimization grounded in empirical experimentation. Method: This paper introduces the first learning-driven, agent-centric ML paradigm, enabling online-evolving LLM agents through the synergistic integration of exploration-augmented instruction tuning, step-level reinforcement learning, and ML-specific multi-source feedback reward modeling—supporting end-to-end autonomous experimentation, reflection, and policy refinement. Contribution/Results: Implemented on a lightweight Qwen-2.5-7B architecture, the agent achieves state-of-the-art performance after training on only nine tasks—outperforming the 671B-parameter DeepSeek-R1. It demonstrates superior cross-task generalization and sustained performance improvement, establishing a new benchmark for self-improving, experiment-driven ML agents.

📝 Abstract
The emergence of large language model (LLM)-based agents has significantly advanced the development of autonomous machine learning (ML) engineering. However, most existing approaches rely heavily on manual prompt engineering, failing to adapt and optimize based on diverse experimental experiences. Focusing on this, for the first time, we explore the paradigm of learning-based agentic ML, where an LLM agent learns through interactive experimentation on ML tasks using online reinforcement learning (RL). To realize this, we propose a novel agentic ML training framework with three key components: (1) exploration-enriched fine-tuning, which enables LLM agents to generate diverse actions for enhanced RL exploration; (2) step-wise RL, which enables training on a single action step, accelerating experience collection and improving training efficiency; (3) an agentic ML-specific reward module, which unifies varied ML feedback signals into consistent rewards for RL optimization. Leveraging this framework, we train ML-Agent, driven by a 7B-sized Qwen-2.5 LLM for autonomous ML. Remarkably, despite being trained on merely 9 ML tasks, our 7B-sized ML-Agent outperforms the 671B-sized DeepSeek-R1 agent. Furthermore, it achieves continuous performance improvements and demonstrates exceptional cross-task generalization capabilities.
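The step-wise RL component described above can be illustrated with a toy sketch: rather than waiting for a full experiment trajectory, the policy is updated after every single action step. This is a minimal, hypothetical illustration (a one-step REINFORCE update on a categorical policy), not the paper's actual algorithm; the action set and reward are invented for the example.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

class StepwisePolicy:
    """Toy categorical policy over discrete 'ML actions' (names hypothetical)."""

    def __init__(self, n_actions, lr=0.5):
        self.logits = [0.0] * n_actions
        self.lr = lr

    def sample(self, rng):
        probs = softmax(self.logits)
        r, acc = rng.random(), 0.0
        for a, p in enumerate(probs):
            acc += p
            if r < acc:
                return a
        return len(probs) - 1

    def update(self, action, reward):
        # Single-step REINFORCE: scale the score-function gradient of the
        # taken action by its reward, with no trajectory rollout needed.
        probs = softmax(self.logits)
        for a in range(len(self.logits)):
            grad = (1.0 if a == action else 0.0) - probs[a]
            self.logits[a] += self.lr * reward * grad

rng = random.Random(0)
policy = StepwisePolicy(n_actions=3)
# Pretend action 2 (say, "tune hyperparameters"; purely illustrative)
# is the only step that yields positive unified reward.
for _ in range(200):
    a = policy.sample(rng)
    reward = 1.0 if a == 2 else 0.0
    policy.update(a, reward)
probs = softmax(policy.logits)
```

Because each update consumes only one action step, experience collection is decoupled from full-experiment completion, which is the efficiency argument the abstract makes for step-wise RL.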
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM agents for autonomous ML via reinforcement learning
Reducing reliance on manual prompt engineering in ML tasks
Improving agent performance and generalization with limited training tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exploration-enriched fine-tuning for diverse actions
Step-wise RL for efficient training
Agentic ML-specific reward module that unifies varied feedback signals
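The third innovation, unifying heterogeneous ML feedback into a single RL reward, can be sketched as follows. The signal kinds, values, and clipping scheme here are assumptions for illustration; the paper's actual reward design is not specified on this page.

```python
def unified_reward(feedback):
    """Map varied experiment outcomes to one scalar reward in [-1, 1].

    `feedback` is a dict with a "kind" key; all kinds and magnitudes
    below are hypothetical examples of the unification idea.
    """
    kind = feedback.get("kind")
    if kind == "error":      # experiment code crashed or failed to run
        return -1.0
    if kind == "timeout":    # the step exceeded its compute budget
        return -0.5
    if kind == "metric":     # e.g. a validation score in [0, 1]
        baseline = feedback.get("baseline", 0.0)
        # Reward improvement over the previous best, clipped to [-1, 1],
        # so metric gains and failure penalties share one scale.
        delta = feedback["value"] - baseline
        return max(-1.0, min(1.0, delta))
    return 0.0               # unrecognized signals contribute nothing
```

For example, `unified_reward({"kind": "metric", "value": 0.82, "baseline": 0.75})` returns the clipped improvement of about 0.07, while a crash maps to -1.0; putting both on one scale is what lets a single RL objective consume them.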