Robust Single-Agent Reinforcement Learning for Regional Traffic Signal Control Under Demand Fluctuations

📅 2025-11-01
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the insufficient robustness of regional traffic signal control under fluctuating demand, this paper proposes a single-agent reinforcement learning framework to circumvent the complexity of multi-agent coordination. Methodologically, it innovatively integrates the DreamerV3 world model with the road network’s adjacency matrix to jointly encode both topological structure and real-time queue states. A sparse reward mechanism is designed around queue dissipation, augmented by probe-vehicle-data-driven dynamic queue estimation and feedback-based phase adjustment. Evaluated in SUMO simulations under 10%–30% OD demand fluctuations, the approach significantly reduces average intersection queue length, demonstrating strong robustness and dynamic adaptability. The framework establishes a scalable and interpretable paradigm for adaptive signal control in complex, time-varying traffic environments.
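The sparse reward mechanism built around queue dissipation can be sketched as follows; granting reward only when queues shrink is what the summary describes, while the unit reward value and the use of total network queue length are illustrative assumptions, not details from the paper.

```python
def queue_dissipation_reward(prev_queues, curr_queues):
    """Sparse reward: emitted only when queues actually dissipate.

    prev_queues / curr_queues: per-intersection queue lengths (vehicles)
    before and after a control step. Rewarding only a net decrease keeps
    the signal sparse; the unit reward magnitude is an assumption made
    for illustration.
    """
    delta = sum(prev_queues) - sum(curr_queues)
    return 1.0 if delta > 0 else 0.0


print(queue_dissipation_reward([4, 9, 2], [3, 7, 2]))  # 1.0: queues dissipated
print(queue_dissipation_reward([4, 9, 2], [5, 9, 2]))  # 0.0: queues grew
```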

πŸ“ Abstract
Traffic congestion, primarily driven by intersection queuing, significantly impacts urban living standards, safety, environmental quality, and economic efficiency. While Traffic Signal Control (TSC) systems hold potential for congestion mitigation, traditional optimization models often fail to capture real-world traffic complexity and dynamics. This study introduces a novel single-agent reinforcement learning (RL) framework for regional adaptive TSC, circumventing the coordination complexities inherent in multi-agent systems through a centralized decision-making paradigm. The model employs an adjacency matrix to unify the encoding of road network topology, real-time queue states derived from probe vehicle data, and current signal timing parameters. Leveraging the efficient learning capabilities of the DreamerV3 world model, the agent learns control policies where actions sequentially select intersections and adjust their signal phase splits to regulate traffic inflow/outflow, analogous to a feedback control system. Reward design prioritizes queue dissipation, directly linking congestion metrics (queue length) to control actions. Simulation experiments conducted in SUMO demonstrate the model's effectiveness: under inference scenarios with multi-level (10%, 20%, 30%) Origin-Destination (OD) demand fluctuations, the framework exhibits robust anti-fluctuation capability and significantly reduces queue lengths. This work establishes a new paradigm for intelligent traffic control compatible with probe vehicle technology. Future research will focus on enhancing practical applicability by incorporating stochastic OD demand fluctuations during training and exploring regional optimization mechanisms for contingency events.
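The abstract's action mechanism, in which the agent sequentially selects an intersection and adjusts its phase split, can be sketched as below. The flat discrete action layout, step size, and clipping bounds are all hypothetical assumptions for illustration; the paper does not specify these details.

```python
import numpy as np


def apply_action(phase_splits, action, step=0.05):
    """Decode a flat discrete action into (intersection, adjustment).

    Hypothetical action space for N intersections: action 2*i decreases
    and 2*i + 1 increases intersection i's green-split fraction by `step`.
    Splits are clipped to an assumed feasible range [0.1, 0.9].
    """
    idx, direction = divmod(action, 2)
    delta = step if direction == 1 else -step
    splits = phase_splits.copy()
    splits[idx] = float(np.clip(splits[idx] + delta, 0.1, 0.9))
    return splits


splits = np.array([0.5, 0.6, 0.4])
new_splits = apply_action(splits, 3)  # action 3 -> increase intersection 1
```

Regulating inflow/outflow through such incremental split adjustments is what makes the scheme analogous to a feedback controller: each step nudges green time toward the more congested approaches.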
Problem

Research questions and friction points this paper is trying to address.

Develops reinforcement learning for adaptive regional traffic signal control
Addresses traffic congestion under fluctuating travel demand conditions
Uses centralized decision-making to coordinate multiple intersection signals
Innovation

Methods, ideas, or system contributions that make the work stand out.

Single-agent reinforcement learning for centralized traffic control
Adjacency matrix encoding network topology and queue states
DreamerV3 world model enabling efficient policy learning
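A minimal sketch of the adjacency-matrix state encoding named above, assuming row-wise concatenation of topology, queue, and timing features; the paper states only that these are encoded jointly, so the exact layout here is an illustrative choice.

```python
import numpy as np


def encode_state(adj, queues, phase_splits):
    """Stack topology, queue lengths, and signal timing into one observation.

    adj          : (N, N) 0/1 adjacency matrix of the road network
    queues       : (N,) queue length per intersection (e.g. probe-vehicle estimates)
    phase_splits : (N,) current green-split fraction per intersection

    Each adjacency row is extended with that intersection's queue and
    timing features, then the matrix is flattened into a single vector
    suitable as input to a world-model encoder.
    """
    feats = np.concatenate(
        [adj.astype(np.float32), queues[:, None], phase_splits[:, None]],
        axis=1,
    )
    return feats.flatten()


# Toy 3-intersection corridor: 1 - 2 - 3
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
obs = encode_state(adj, np.array([4.0, 9.0, 2.0]), np.array([0.5, 0.6, 0.4]))
print(obs.shape)  # (15,): 3 rows x (3 adjacency + queue + split)
```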
Qiang Li
College of Urban Transportation and Logistics, Shenzhen Technology University, Shenzhen, Guangdong 518118, China
Jin Niu
College of Urban Transportation and Logistics, Shenzhen Technology University, Shenzhen, Guangdong 518118, China
Lina Yu
Shenzhen Technology University
Humanitarian logistics, Resource allocation, Dynamic programming, Reinforcement learning