Gradientsys: A Multi-Agent LLM Scheduler with ReAct Orchestration

📅 2025-07-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses key challenges in multi-agent systems—namely, inefficient parallel scheduling of heterogeneous tasks, poor transparency, and weak fault tolerance—by proposing an efficient, scalable dynamic scheduling framework. Methodologically, it establishes a closed-loop planning mechanism grounded in the ReAct paradigm; introduces a typed Model-Context Protocol (MCP) to uniformly coordinate heterogeneous AI agents; employs an LLM-driven intelligent scheduler for one-to-many task dispatching, real-time retry, and re-planning; and integrates an SSE-based observability layer to ensure execution transparency. The core contribution lies in the deep integration of dynamic planning, protocol-governed context interaction, and LLM-based scheduling, enabling synergistic reuse of diverse tools—including PDF parsing, web search, and GUI control. Evaluated on the GAIA benchmark, the framework significantly outperforms the MinionS baseline in task success rate while reducing latency and API call costs, demonstrating superior scalability, parallel efficiency, and system observability.

📝 Abstract
We present Gradientsys, a next-generation multi-agent scheduling framework that coordinates diverse specialized AI agents using a typed Model-Context Protocol (MCP) and a ReAct-based dynamic planning loop. At its core, Gradientsys employs an LLM-powered scheduler for intelligent one-to-many task dispatch, enabling parallel execution of heterogeneous agents such as PDF parsers, web search modules, GUI controllers, and web builders. The framework supports hybrid synchronous/asynchronous execution, respects agent capacity constraints, and incorporates a robust retry-and-replan mechanism to handle failures gracefully. To promote transparency and trust, Gradientsys includes an observability layer streaming real-time agent activity and intermediate reasoning via Server-Sent Events (SSE). We offer an architectural overview and evaluate Gradientsys against existing frameworks in terms of extensibility, scheduling topology, tool reusability, parallelism, and observability. Experiments on the GAIA general-assistant benchmark show that Gradientsys achieves higher task success rates with reduced latency and lower API costs compared to a MinionS-style baseline, demonstrating the strength of its LLM-driven multi-agent orchestration.
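The observability layer in the abstract streams agent activity over Server-Sent Events. A minimal sketch of SSE framing for such a stream, assuming events are dicts with `agent` and `thought` keys (the event shape is an assumption, only the `event:`/`data:` framing follows the standard `text/event-stream` format):

```python
import json

def sse_frame(event_type: str, data: dict) -> str:
    """Frame one event in text/event-stream format: event/data lines
    terminated by a blank line."""
    return f"event: {event_type}\ndata: {json.dumps(data)}\n\n"

# Illustrative agent-activity events; keys are assumed, not from the paper.
events = [
    {"agent": "scheduler", "thought": "dispatch web_search and pdf_parser in parallel"},
    {"agent": "web_search", "thought": "querying for GAIA benchmark results"},
]
stream = "".join(sse_frame("agent_activity", e) for e in events)
print(stream)
```

A browser or dashboard subscribed via `EventSource` would receive each frame as it is emitted, which is what makes intermediate reasoning visible in real time.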
Problem

Research questions and friction points this paper is trying to address.

Inefficient parallel scheduling of heterogeneous tasks across specialized agents
Poor transparency into agent activity and intermediate reasoning
Weak fault tolerance when individual agents fail mid-task
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent scheduling with ReAct orchestration
LLM-powered scheduler for parallel task dispatch
Real-time observability via Server-Sent Events
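The one-to-many dispatch with retry described in the innovations can be sketched as a scheduler that fans a plan out to worker agents in parallel and retries failed sub-tasks up to a limit. The agent callables, plan shape, and retry policy below are illustrative assumptions, not the paper's implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch(plan, agents, max_retries=2):
    """Run each sub-task on its agent in parallel; retry on failure.

    A sub-task that exhausts its retries yields None, which a ReAct-style
    planner could treat as a signal to re-plan.
    """
    def run(sub_task):
        for attempt in range(max_retries + 1):
            try:
                return agents[sub_task["agent"]](sub_task["input"])
            except Exception:
                if attempt == max_retries:
                    return None
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run, plan))

# Toy agents: one flaky (fails once, then succeeds), one reliable.
calls = {"n": 0}
def flaky_pdf_parser(x):
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient failure")
    return f"parsed:{x}"

agents = {"pdf_parser": flaky_pdf_parser,
          "web_search": lambda q: f"results:{q}"}
plan = [{"agent": "pdf_parser", "input": "report.pdf"},
        {"agent": "web_search", "input": "GAIA"}]
print(dispatch(plan, agents))  # ['parsed:report.pdf', 'results:GAIA']
```

The thread pool gives the hybrid synchronous/asynchronous flavor mentioned in the abstract: independent sub-tasks overlap, while the scheduler still collects results in plan order.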