🤖 AI Summary
This work addresses the challenges of automated AI research, which is hindered by high computational costs and ambiguous performance attribution, limiting the effectiveness of existing large language model agents. To overcome these limitations, we propose MARS, a novel framework that integrates budget-aware Monte Carlo Tree Search (MCTS), a modular "Design-Decompose-Implement" pipeline, and a cross-branch contrastive reflective memory mechanism. This combination enables efficient and interpretable autonomous research. Evaluated on MLE-Bench, MARS achieves state-of-the-art performance among open-source frameworks, with 63% of its utilized experiences derived from cross-branch knowledge transfer, demonstrating insight-like generalization and decision-making capabilities.
📝 Abstract
Automating AI research differs from general software engineering due to computationally expensive evaluation (e.g., model training) and opaque performance attribution. Current LLM-based agents struggle here, often generating monolithic scripts that ignore execution costs and causal factors. We introduce MARS (Modular Agent with Reflective Search), a framework optimized for autonomous AI research. MARS relies on three pillars: (1) Budget-Aware Planning via cost-constrained Monte Carlo Tree Search (MCTS) to explicitly balance performance with execution expense; (2) Modular Construction, employing a "Design-Decompose-Implement" pipeline to manage complex research repositories; and (3) Comparative Reflective Memory, which addresses credit assignment by analyzing solution differences to distill high-signal insights. MARS achieves state-of-the-art performance among open-source frameworks on MLE-Bench under comparable settings, maintaining competitiveness with the global leaderboard's top methods. Furthermore, the system exhibits qualitative "Aha!" moments, where 63% of all utilized lessons originate from cross-branch transfer, demonstrating that the agent effectively generalizes insights across search paths.
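To make the budget-aware planning idea concrete, here is a minimal sketch of how cost-constrained MCTS selection might look: standard UCT scoring with an added cost penalty, so expensive branches (e.g., long training runs) are explored only when their estimated value justifies the expense. The class and function names, and the linear penalty form, are illustrative assumptions, not MARS's actual implementation.

```python
import math

class Node:
    """Hypothetical search-tree node; `cost` is the action's estimated execution expense."""
    def __init__(self, cost):
        self.cost = cost
        self.visits = 0
        self.value_sum = 0.0
        self.children = []

    def mean_value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(parent, c_explore=1.4, lam=0.1):
    """Pick the child maximizing UCT score minus a linear cost penalty."""
    def score(child):
        if child.visits == 0:
            return float("inf")  # always try unvisited children once
        exploit = child.mean_value()
        explore = c_explore * math.sqrt(math.log(parent.visits) / child.visits)
        return exploit + explore - lam * child.cost
    return max(parent.children, key=score)

# Tiny usage example: a cheap mediocre branch vs. an expensive stronger one.
root = Node(cost=0.0)
cheap, pricey = Node(cost=1.0), Node(cost=8.0)
root.children = [cheap, pricey]
for child, reward in [(cheap, 0.5), (pricey, 0.9)]:
    child.visits, child.value_sum = 10, reward * 10
root.visits = 20
best = select_child(root)
```

With these numbers the cost penalty outweighs the pricey branch's higher mean reward, so the cheap branch is selected; raising its reward or lowering `lam` would flip the choice.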