🤖 AI Summary
Existing evaluations of large language model (LLM) agents lack a standardized, end-to-end assessment of their capabilities in autonomous AI research. Method: We introduce MLGym, the first Gym-style framework for AI research, accompanied by MLGym-Bench, a benchmark comprising 13 cross-domain, open-ended research tasks that require agents to perform hypothesis generation, model implementation, iterative experimentation, and result analysis. Contribution/Results: As the first standardized environment supporting RL-based training of agents for AI research, MLGym systematically defines and empirically evaluates LLMs' capabilities and limitations in autonomous scientific discovery. It supports integration of state-of-the-art models (e.g., GPT-4o, Claude-3.5-Sonnet, Llama-3.1-405B) and extensible algorithmic plugins. Experiments reveal that even top-tier models succeed only at localized improvements, such as hyperparameter optimization, and fail to generate novel algorithms or architectures. The framework is open-sourced, establishing foundational infrastructure for evaluating and advancing AI-driven scientific autonomy.
📝 Abstract
We introduce Meta MLGym and MLGym-Bench, a new framework and benchmark for evaluating and developing LLM agents on AI research tasks. This is the first Gym environment for machine learning (ML) tasks, enabling research on reinforcement learning (RL) algorithms for training such agents. MLGym-Bench consists of 13 diverse and open-ended AI research tasks spanning domains such as computer vision, natural language processing, reinforcement learning, and game theory. Solving these tasks requires real-world AI research skills such as generating new ideas and hypotheses, creating and processing data, implementing ML methods, training models, running experiments, analyzing the results, and iterating through this process to improve on a given task. We evaluate a number of frontier large language models (LLMs) on our benchmark, such as Claude-3.5-Sonnet, Llama-3.1-405B, GPT-4o, o1-preview, and Gemini-1.5 Pro. Our MLGym framework makes it easy to add new tasks, integrate and evaluate models or agents, generate synthetic data at scale, and develop new learning algorithms for training agents on AI research tasks. We find that current frontier models can improve on the given baselines, usually by finding better hyperparameters, but do not generate novel hypotheses, algorithms, architectures, or substantial improvements. We open-source our framework and benchmark to facilitate future research in advancing the AI research capabilities of LLM agents.
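To make the "Gym-style" framing concrete, the sketch below shows the standard reset/step interaction loop that Gym environments expose, applied to a toy research task. All names here (`MLTaskEnv`, `run_agent`, the scoring rule) are illustrative assumptions for exposition, not MLGym's actual API.

```python
# Minimal sketch of a Gym-style interaction loop, assuming a toy task where
# the agent's "action" is a hyperparameter guess and reward reflects how close
# the guess lands to a hidden optimum. Not MLGym's real interface.

class MLTaskEnv:
    """Toy environment: reward is higher the closer the action is to a hidden optimum."""

    def __init__(self, optimum: float = 0.3):
        self.optimum = optimum
        self.best_score = 0.0

    def reset(self) -> dict:
        # Return the initial observation, as a Gym-style env would.
        self.best_score = 0.0
        return {"task": "maximize validation score", "best_score": self.best_score}

    def step(self, action: float):
        # Score the action, track the best result, and signal when the task is solved.
        score = max(0.0, 1.0 - abs(action - self.optimum))
        self.best_score = max(self.best_score, score)
        observation = {"task": "maximize validation score", "best_score": self.best_score}
        reward = score
        done = score > 0.95
        return observation, reward, done


def run_agent(env: MLTaskEnv, candidates):
    """Stand-in for an LLM agent: try candidate actions until done or exhausted."""
    obs = env.reset()
    for action in candidates:
        obs, reward, done = env.step(action)
        if done:
            break
    return obs["best_score"]


best = run_agent(MLTaskEnv(), [0.9, 0.5, 0.31])
print(round(best, 2))
```

In MLGym itself, observations and actions are far richer (e.g., shell commands, code edits, and experiment logs rather than a single scalar), but the same reset/step contract is what enables plugging in RL training algorithms.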