Towards a Science of Scaling Agent Systems

📅 2025-12-09
🤖 AI Summary
This work addresses the lack of interpretable scaling laws for language model agent systems by proposing the first quantitative framework for studying agent system scalability. We conduct systematic experiments across four benchmarks, evaluating five agent architectures (single-agent, independent, centralized, decentralized, and hybrid) paired with three large language models. Leveraging task attributes, we develop a performance prediction model that identifies the optimal coordination strategy for 87% of configurations (R² = 0.513). The study uncovers three fundamental scaling principles: (i) a coordination-redundancy trade-off in tool invocation, (ii) capability saturation, and (iii) topology-dependent error amplification. Empirically, centralized architectures improve parallelizable financial reasoning by 80.9%, decentralized architectures lead by 9.2% on dynamic web navigation, while multi-agent systems consistently degrade performance by 39-70% on sequential reasoning tasks.

📝 Abstract
Agents, language model (LM)-based systems capable of reasoning, planning, and acting, are becoming the dominant paradigm for real-world AI applications. Despite this widespread adoption, the principles that determine their performance remain underexplored, leaving practitioners to rely on heuristics rather than principled design choices. We address this gap by deriving quantitative scaling principles for agent systems, evaluated across four diverse benchmarks: Finance-Agent, BrowseComp-Plus, PlanCraft, and Workbench. Using five canonical architectures (Single, Independent, Centralized, Decentralized, Hybrid) instantiated across three LLM families, we perform a controlled evaluation spanning 180 configurations with standardized tools and token budgets. We derive a predictive model from empirical coordination metrics, including efficiency, overhead, error amplification, and redundancy, that achieves a cross-validated R² = 0.513. We identify three dominant effects: (1) a tool-coordination trade-off: under fixed computational budgets, tool-heavy tasks suffer disproportionately from multi-agent overhead; (2) capability saturation: coordination yields diminishing or negative returns (β = -0.408, p < 0.001) once single-agent baselines exceed ~45%; (3) topology-dependent error amplification: independent agents amplify errors 17.2x through unchecked propagation, while centralized coordination contains this to 4.4x. Centralized coordination improves performance by 80.9% on parallelizable tasks such as financial reasoning, while decentralized coordination excels on dynamic web navigation (+9.2% vs. +0.2%). Yet on sequential reasoning tasks, all multi-agent variants degrade performance by 39-70%. The framework predicts the optimal coordination strategy for 87% of held-out configurations, providing a predictive principle of agentic scaling grounded in measurable task properties.
Problem

Research questions and friction points this paper is trying to address.

Deriving quantitative scaling principles for agent systems
Addressing reliance on heuristics in agent system design
Predicting optimal coordination strategies for agent architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Derived quantitative scaling principles for agent systems
Used empirical coordination metrics to create predictive model
Identified optimal coordination strategies based on task properties
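The abstract's three reported effects imply a simple decision procedure for choosing a coordination strategy. Below is a minimal, hypothetical sketch of that rule, distilled only from the findings quoted above (the ~45% capability-saturation threshold, the centralized advantage on parallelizable tasks, the decentralized advantage on dynamic navigation, and the multi-agent degradation on sequential reasoning). The function name and boolean task attributes are illustrative assumptions, not the paper's actual predictive model, which is a fitted regression over coordination metrics.

```python
def recommend_coordination(single_agent_score: float,
                           parallelizable: bool,
                           dynamic_environment: bool) -> str:
    """Toy decision rule distilled from the paper's reported effects.

    single_agent_score: single-agent baseline accuracy in [0, 1].
    parallelizable: task decomposes into independent subtasks
        (e.g., financial reasoning).
    dynamic_environment: task state changes during execution
        (e.g., web navigation).
    """
    # Capability saturation: beyond ~45% single-agent accuracy,
    # coordination yields diminishing or negative returns.
    if single_agent_score > 0.45:
        return "single"
    # Centralized coordination helped most on parallelizable tasks
    # (reported +80.9% on financial reasoning).
    if parallelizable:
        return "centralized"
    # Decentralized coordination excelled on dynamic web navigation
    # (reported +9.2% vs. +0.2%).
    if dynamic_environment:
        return "decentralized"
    # Sequential reasoning: all multi-agent variants degraded
    # performance by 39-70%, so default to a single agent.
    return "single"
```

This is a qualitative caricature; the paper's actual framework predicts strategy from measured coordination efficiency, overhead, error amplification, and redundancy rather than hand-set thresholds.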
Yubin Kim
MIT
Health AI, AI Safety, Agents
Ken Gu
Paul G. Allen School of Computer Science & Engineering, University of Washington
Data Science, Natural Language Processing, Human-Computer Interaction
Chanwoo Park
Massachusetts Institute of Technology
Chunjong Park
Google DeepMind
Samuel Schmidgall
Google DeepMind
AI Agents, LLM Agents, Large Language Models, Medical AI
A. Ali Heydari
Google Research
Yao Yan
University of Electronic Science and Technology of China
Cutting Chatter, Capsule Robot, Exoskeleton Robot
Zhihan Zhang
PhD student, University of Notre Dame
Natural Language Processing
Yuchen Zhuang
Google DeepMind
Reinforcement Learning, Large Language Models, Agentic Coding
Mark Malhotra
Google Research
Paul Pu Liang
Massachusetts Institute of Technology
Hae Won Park
MIT
Human-Robot Interaction, Artificial Intelligence, Socially Interactive Agents, Conversational Agents, Embodied Social Intelligence
Yuzhe Yang
Google Research
Xuhai Xu
Assistant Professor, Columbia University | Google
Human-Computer Interaction, Ubiquitous Computing, Human-Centered AI, mHealth, Health Informatics
Yilun Du
Harvard University
Artificial Intelligence, Machine Learning, Robotics, Computer Vision
Shwetak Patel
University of Washington, Washington Research Foundation Endowed Professor, Computer Science
Ubiquitous Computing, Human-Computer Interaction, Sensors, Embedded Systems
Tim Althoff
Associate Professor of Computer Science, University of Washington
Human AI Interaction, Natural Language Processing, Behavioral Data Science, AI for Mental Health
Daniel McDuff
Google and University of Washington
Affective Computing, Deep Learning, Human-Computer Interaction, Human-Centered AI, Computer Vision
Xin Liu
Google Research