Gaia2: Benchmarking LLM Agents on Dynamic and Asynchronous Environments

📅 2026-02-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current evaluations of large language model (LLM) agents are largely confined to static or synchronous environments, failing to capture real-world challenges such as dynamic environmental evolution and asynchronous event occurrence. This work proposes Gaia2, a benchmark enabling systematic evaluation of LLM agents in dynamic, asynchronous scenarios within a consumer-grade setting. Built on the open-source Agents Research Environments (ARE) platform, Gaia2 introduces a write-action verifier, asynchronous event simulation, and a multi-agent interaction framework, enabling fine-grained, action-level evaluation and reinforcement learning from verifiable rewards. Experimental results show that GPT-5 (high) achieves the highest pass@1 rate at 42% but underperforms on time-sensitive tasks, while Kimi-K2 emerges as the best open-source model with a 21% pass@1 rate, revealing fundamental trade-offs among reasoning capability, efficiency, and robustness.
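The summary reports pass@1 rates. For reference, pass@1 is the k=1 case of the standard unbiased pass@k estimator (Chen et al., 2021); the sketch below is a generic illustration of that metric, not code from the paper:

```python
# Unbiased pass@k estimator: probability that at least one of k samples
# drawn (without replacement) from n generated attempts is correct,
# given that c of the n attempts were correct.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:
        # Fewer incorrect attempts than k: every k-subset contains a success.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

With a single attempt per scenario (n = k = 1), this reduces to the plain fraction of scenarios solved on the first try, which is how pass@1 is usually reported.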

📝 Abstract
We introduce Gaia2, a benchmark for evaluating large language model agents in realistic, asynchronous environments. Unlike prior static or synchronous evaluations, Gaia2 introduces scenarios where environments evolve independently of agent actions, requiring agents to operate under temporal constraints, adapt to noisy and dynamic events, resolve ambiguity, and collaborate with other agents. Each scenario is paired with a write-action verifier, enabling fine-grained, action-level evaluation and making Gaia2 directly usable for reinforcement learning from verifiable rewards. Our evaluation of state-of-the-art proprietary and open-source models shows that no model dominates across capabilities: GPT-5 (high) reaches the strongest overall score of 42% pass@1 but fails on time-sensitive tasks; Claude-4 Sonnet trades accuracy and speed for cost; and Kimi-K2 leads among open-source models with 21% pass@1. These results highlight fundamental trade-offs between reasoning, efficiency, and robustness, and expose challenges in closing the "sim2real" gap. Gaia2 is built on a consumer environment with the open-source Agents Research Environments platform and designed to be easy to extend. By releasing Gaia2 alongside the foundational ARE framework, we aim to provide the community with a flexible infrastructure for developing, benchmarking, and training the next generation of practical agent systems.
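The abstract describes a write-action verifier that scores agents at the level of individual actions and yields the binary signal needed for reinforcement learning from verifiable rewards. A minimal sketch of that idea follows; the class and function names are illustrative assumptions, not the ARE API:

```python
# Hypothetical write-action verifier: canonicalise the agent's write actions
# and compare them against the scenario's expected set, emitting a binary
# reward (1.0 on exact match, 0.0 otherwise) suitable as a verifiable
# RL reward signal. Illustrative only; not the paper's implementation.
from dataclasses import dataclass

@dataclass(frozen=True)
class WriteAction:
    tool: str        # e.g. a tool name such as "calendar.create_event"
    args: tuple      # canonicalised (key, value) pairs

def canonical(tool: str, args: dict) -> WriteAction:
    # Sorting the argument items makes comparison order-insensitive.
    return WriteAction(tool, tuple(sorted(args.items())))

def verify(agent_actions, expected_actions) -> float:
    """Reward 1.0 only if every expected write action was performed and
    no unexpected write action occurred (set comparison)."""
    return 1.0 if set(agent_actions) == set(expected_actions) else 0.0
```

Because the reward is computed mechanically from the action trace rather than by a judge model, it can be logged per action and fed directly into an RL training loop.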
Problem

Research questions and friction points this paper is trying to address.

LLM agents
dynamic environments
asynchronous environments
benchmarking
sim2real gap
Innovation

Methods, ideas, or system contributions that make the work stand out.

asynchronous environments
dynamic benchmarking
write-action verifier
reinforcement learning from verifiable rewards
sim2real gap
👥 Authors
Romain Froger, Meta SuperIntelligence Labs
Pierre Andrews, Meta SuperIntelligence Labs
Matteo Bettini, PhD Candidate, University of Cambridge (Robotics, Reinforcement Learning, Computer Science)
Amar Budhiraja, Meta SuperIntelligence Labs
Ricardo Silveira Cabral, Distinguished Research Scientist, NVIDIA (Language Processing, Computer Vision, Artificial Intelligence)
Virginie Do, Meta (machine learning, artificial intelligence, social choice)
Emilien Garreau, Meta SuperIntelligence Labs
Jean-Baptiste Gaya, Meta SuperIntelligence Labs
Hugo Laurençon, Meta SuperIntelligence Labs
Maxime Lecanu, Meta SuperIntelligence Labs
Kunal Malkan, Meta SuperIntelligence Labs
Dheeraj Mekala, University of California, San Diego (Natural Language Processing, Data Mining, Machine Learning)
Pierre Ménard, OvGU Magdeburg
Gerard Moreno-Torres Bertran, Meta SuperIntelligence Labs
Ulyana Piterbarg, NYU Courant Institute of Mathematical Sciences (reinforcement learning, natural language processing, open-endedness)
Mikhail Plekhanov, Meta SuperIntelligence Labs
Mathieu Rita, Research Scientist, Meta (ex: PhD, INRIA) (Large Language Models, Reinforcement Learning, Computational Linguistics)
Andrey Rusakov, Meta SuperIntelligence Labs
Vladislav Vorotilov, Meta SuperIntelligence Labs
Mengjue Wang, Meta SuperIntelligence Labs
Ian Yu, Meta SuperIntelligence Labs
Amine Benhalloum, Meta SuperIntelligence Labs
Grégoire Mialon, Meta AI (Machine learning)
Thomas Scialom, FAIR - Meta AI (AGI, Agents, Reinforcement Learning, RLHF)