🤖 AI Summary
Existing chain-based AI systems for traffic management (e.g., TrafficGPT) suffer from sequential task execution, excessive token consumption, and poor scalability, limiting their effectiveness in complex urban traffic scenarios. To address these limitations, this work proposes a directed-graph-based collaborative AI agent framework centered on a Brain Agent, which automatically decomposes user queries, models inter-task dependencies, and performs context-aware dynamic resource scheduling. Specialized agents operate in parallel to handle data retrieval, analytical reasoning, visualization, and traffic simulation. A lightweight token management strategy further optimizes computational efficiency. Experimental evaluation demonstrates that, compared to TrafficGPT, the proposed framework reduces token consumption by 50.2%, decreases average response latency by 19.0%, and improves multi-query throughput by 23.0%, thereby significantly enhancing both system responsiveness and scalability.
📝 Abstract
Large Language Models (LLMs) offer significant promise for intelligent traffic management; however, current chain-based systems like TrafficGPT are hindered by sequential task execution, high token usage, and poor scalability, making them inefficient for complex, real-world scenarios. To address these limitations, we propose GraphTrafficGPT, a novel graph-based architecture that fundamentally redesigns task coordination for LLM-driven traffic applications. GraphTrafficGPT represents tasks and their dependencies as nodes and edges in a directed graph, enabling efficient parallel execution and dynamic resource allocation. At the core of the architecture is a Brain Agent that decomposes user queries, constructs optimized dependency graphs, and coordinates a network of specialized agents for data retrieval, analysis, visualization, and simulation. By introducing advanced context-aware token management and supporting concurrent multi-query processing, the proposed architecture handles the interdependent tasks typical of modern urban mobility environments. Experimental results demonstrate that GraphTrafficGPT reduces token consumption by 50.2% and average response latency by 19.0% compared to TrafficGPT, while supporting simultaneous multi-query execution with up to a 23.0% improvement in efficiency.
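The core scheduling idea described above — modeling tasks as nodes in a directed dependency graph and running independent tasks in parallel — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the task names, the Brain-Agent-style decomposition, and the `run_task` placeholder are all assumptions for demonstration purposes.

```python
# Sketch: task coordination via a directed dependency graph.
# Independent nodes (e.g. analysis and simulation) execute concurrently,
# in contrast to a chain-based system that would run them sequentially.
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

# Hypothetical decomposition of one user query into tasks;
# each task maps to the set of tasks it depends on.
dependency_graph = {
    "retrieve_data": set(),
    "analyze": {"retrieve_data"},
    "simulate": {"retrieve_data"},
    "visualize": {"analyze", "simulate"},
}

def run_task(name: str) -> str:
    # Placeholder for dispatching to a specialized agent
    # (data retrieval, analysis, visualization, simulation).
    return f"{name}: done"

results = {}
ts = TopologicalSorter(dependency_graph)
ts.prepare()
with ThreadPoolExecutor() as pool:
    while ts.is_active():
        ready = list(ts.get_ready())  # tasks whose dependencies are satisfied
        # "analyze" and "simulate" become ready together and run in parallel.
        for name, output in zip(ready, pool.map(run_task, ready)):
            results[name] = output
            ts.done(name)

print(results["visualize"])  # executes only after analyze and simulate finish
```

The dynamic resource allocation and context-aware token management described in the abstract would layer on top of this scheduling skeleton, e.g. by pruning shared context before each `run_task` dispatch.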