ARE: Scaling Up Agent Environments and Evaluations

📅 2025-09-21
🤖 AI Summary
This work addresses three key challenges in agent research: limited environment scalability, static evaluation benchmarks, and misalignment with real-world deployment. To tackle these, we propose ARE, a scalable agent research platform, and Gaia2, a dynamic, asynchronous benchmark built on it. ARE introduces modular environment abstractions, integration of synthetic and real-world applications, dynamic validators, and asynchronous task execution, enabling robust training and evaluation under noise, multi-agent collaboration, and temporal constraints. Gaia2 adopts an asynchronous, dynamic, temporally constrained evaluation paradigm that exposes failure modes invisible under static testing, and leverages ARE's abstract interfaces to support community-driven, continuous benchmark expansion. Experimental results reveal a fundamental trade-off between reasoning capability and inference efficiency in current systems, show diminishing returns from budget scaling, and underscore the need for adaptive computation and novel architectural designs.

📝 Abstract
We introduce Meta Agents Research Environments (ARE), a research platform for scalable creation of environments, integration of synthetic or real applications, and execution of agentic orchestrations. ARE provides simple abstractions to build complex and diverse environments, each with their own rules, tools, content, and verifiers, helping to bridge the gap between model development and real-world deployment. We also propose Gaia2, a benchmark built in ARE and designed to measure general agent capabilities. Beyond search and execution, Gaia2 requires agents to handle ambiguities and noise, adapt to dynamic environments, collaborate with other agents, and operate under temporal constraints. Unlike prior benchmarks, Gaia2 runs asynchronously, surfacing new failure modes that are invisible in static settings. Our experiments show that no system dominates across the intelligence spectrum: stronger reasoning often comes at the cost of efficiency, and budget scaling curves plateau, highlighting the need for new architectures and adaptive compute strategies. Perhaps more importantly, ARE abstractions enable continuous extension of Gaia2 to other environments, empowering the community to rapidly create new benchmarks tailored to their domains. In AI's second half, progress increasingly depends on defining meaningful tasks and robust evaluations to drive frontier capabilities forward.
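To make the abstract's notion of environments "with their own rules, tools, content, and verifiers" concrete, here is a minimal sketch of what such an abstraction could look like. All names and signatures below are illustrative assumptions; this is not ARE's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch of an environment abstraction in the spirit the
# abstract describes: tools an agent can call, an event log of what it
# did, and verifiers that judge the log. Not ARE's real interface.

@dataclass
class Tool:
    name: str
    fn: Callable[..., str]  # the callable the agent can invoke

    def __call__(self, *args, **kwargs) -> str:
        return self.fn(*args, **kwargs)

@dataclass
class Environment:
    tools: Dict[str, Tool] = field(default_factory=dict)
    event_log: List[str] = field(default_factory=list)
    verifiers: List[Callable[[List[str]], bool]] = field(default_factory=list)

    def register_tool(self, tool: Tool) -> None:
        self.tools[tool.name] = tool

    def act(self, tool_name: str, *args, **kwargs) -> str:
        """Execute a tool call on behalf of an agent and log it."""
        result = self.tools[tool_name](*args, **kwargs)
        self.event_log.append(f"{tool_name} -> {result}")
        return result

    def verify(self) -> bool:
        """A task succeeds only if every verifier accepts the event log."""
        return all(v(self.event_log) for v in self.verifiers)

# Example: a toy "email" app whose verifier checks that a send occurred.
env = Environment()
env.register_tool(Tool("send_email", lambda to, body: f"sent to {to}"))
env.verifiers.append(lambda log: any(l.startswith("send_email") for l in log))

env.act("send_email", "alice@example.com", "hello")
print(env.verify())  # True
```

Keeping verifiers as plain functions over the event log is one way a single harness could score very different environments through the same interface, which is the property the abstract credits for Gaia2's extensibility.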
Problem

Research questions and friction points this paper is trying to address.

Scaling agent environments and evaluations for real-world deployment
Measuring general agent capabilities beyond search and execution
Addressing limitations of static benchmarks with dynamic testing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scalable platform for agent environment creation
Benchmark measuring general capabilities with noise
Asynchronous execution revealing new failure modes
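The asynchronous-execution point can be illustrated with a toy simulation. This is a hypothetical sketch, not Gaia2's actual harness: the environment fires events on the wall clock whether or not the agent has responded, so a slow agent can miss a deadline even when its eventual answer would have been correct in a static, turn-based setting.

```python
import asyncio

async def environment_event(inbox: asyncio.Queue, delay: float, msg: str):
    """The world moves on its own schedule, independent of the agent."""
    await asyncio.sleep(delay)
    await inbox.put(msg)

async def agent(inbox: asyncio.Queue, think_time: float, deadline: float) -> str:
    msg = await inbox.get()
    await asyncio.sleep(think_time)  # reasoning latency
    # Temporal constraint: the reply only counts if it lands in time.
    return "ok" if think_time <= deadline else "missed deadline"

async def run(think_time: float) -> str:
    inbox: asyncio.Queue = asyncio.Queue()
    asyncio.create_task(environment_event(inbox, 0.01, "meeting moved to 3pm"))
    return await agent(inbox, think_time, deadline=0.05)

print(asyncio.run(run(think_time=0.02)))  # fast agent: ok
print(asyncio.run(run(think_time=0.10)))  # slow agent: missed deadline
```

A static benchmark would pause the world while the agent deliberates, so both runs above would score identically; only the asynchronous setting separates them, which is the failure mode the bullet refers to.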
👥 Authors

Pierre Andrews
Meta Superintelligence Labs
Amine Benhalloum
Meta Superintelligence Labs
Gerard Moreno-Torres Bertran
Meta Superintelligence Labs
Matteo Bettini
PhD Candidate, University of Cambridge
Robotics, Reinforcement Learning, Computer Science
Amar Budhiraja
Meta Superintelligence Labs
Ricardo Silveira Cabral
Distinguished Research Scientist, NVIDIA
Language Processing, Computer Vision, Artificial Intelligence
Virginie Do
Meta
Machine Learning, Artificial Intelligence, Social Choice
Romain Froger
Meta Superintelligence Labs
Emilien Garreau
Meta Superintelligence Labs
Jean-Baptiste Gaya
Meta Superintelligence Labs
Hugo Laurençon
Meta Superintelligence Labs
Maxime Lecanu
Meta Superintelligence Labs
Kunal Malkan
Meta Superintelligence Labs
Dheeraj Mekala
University of California, San Diego
Natural Language Processing, Data Mining, Machine Learning
Pierre Ménard
OvGU Magdeburg
Grégoire Mialon
Meta AI
Machine Learning
Ulyana Piterbarg
NYU Courant Institute of Mathematical Sciences
Reinforcement Learning, Natural Language Processing, Open-Endedness
Mikhail Plekhanov
Meta Superintelligence Labs
Mathieu Rita
Research Scientist, Meta (ex: PhD, INRIA)
Large Language Models, Reinforcement Learning, Computational Linguistics
Andrey Rusakov
Meta Superintelligence Labs
Thomas Scialom
FAIR - Meta AI
AGI, Agents, Reinforcement Learning, RLHF
Vladislav Vorotilov
Meta Superintelligence Labs
Mengjue Wang
Meta Superintelligence Labs
Ian Yu
Meta Superintelligence Labs