🤖 AI Summary
This work addresses three key challenges in agent research: limited environmental scalability, static evaluation benchmarks, and misalignment with real-world deployment scenarios. To tackle these, we propose ARE, a scalable agent research platform, and Gaia2, a dynamic, asynchronous benchmark. ARE introduces modular environment abstractions, seamless integration of synthetic and real-world applications, dynamic validators, and asynchronous task execution, enabling robust training and evaluation under noise, multi-agent collaboration, and temporal constraints. Gaia2 pioneers an asynchronous, dynamic, temporally constrained evaluation paradigm that exposes failure modes invisible under static testing, and leverages ARE's abstract interfaces to support community-driven, continuous benchmark expansion. Experimental results reveal a fundamental trade-off between reasoning capability and inference efficiency in current systems and diminishing returns from budget scaling, underscoring the need for adaptive computation and novel architectural designs.
📝 Abstract
We introduce Meta Agents Research Environments (ARE), a research platform for scalable creation of environments, integration of synthetic or real applications, and execution of agentic orchestrations. ARE provides simple abstractions to build complex and diverse environments, each with its own rules, tools, content, and verifiers, helping to bridge the gap between model development and real-world deployment. We also propose Gaia2, a benchmark built in ARE and designed to measure general agent capabilities. Beyond search and execution, Gaia2 requires agents to handle ambiguities and noise, adapt to dynamic environments, collaborate with other agents, and operate under temporal constraints. Unlike prior benchmarks, Gaia2 runs asynchronously, surfacing new failure modes that are invisible in static settings. Our experiments show that no system dominates across the intelligence spectrum: stronger reasoning often comes at the cost of efficiency, and budget scaling curves plateau, highlighting the need for new architectures and adaptive compute strategies. Perhaps more importantly, ARE abstractions enable continuous extension of Gaia2 to other environments, empowering the community to rapidly create new benchmarks tailored to their domains. In AI's second half, progress increasingly depends on defining meaningful tasks and robust evaluations to drive frontier capabilities forward.