Evaluating the Robustness of Reinforcement Learning based Adaptive Traffic Signal Control

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing reinforcement learning (RL) approaches for traffic signal control, which often lack robustness and training efficiency under complex phase structures and dynamic traffic demands, hindering real-world deployment. The authors propose an RL-based adaptive signal control algorithm tailored to the standard eight-phase ring-and-barrier structure, enhanced by a distributed asynchronous training architecture to improve learning efficiency. The method is systematically evaluated across diverse traffic volumes and origin-destination patterns to assess its generalization capability. Experimental results demonstrate that, in realistic intersection scenarios, the proposed approach reduces vehicle delay by 11%–32% compared to optimized actuated control. Notably, it maintains significant performance advantages even under unseen, highly heterogeneous traffic demands, thereby providing the first empirical validation of RL controller robustness and practicality within a realistic eight-phase signal framework.

📝 Abstract
Reinforcement learning (RL) has attracted increasing interest for adaptive traffic signal control due to its model-free ability to learn control policies directly from interaction with the traffic environment. However, several challenges remain before RL-based signal control can be considered ready for field deployment. Many existing studies rely on simplified signal timing structures, the robustness of trained models under varying traffic demand conditions remains insufficiently evaluated, and runtime efficiency continues to pose challenges when training RL algorithms in microscopic traffic simulation environments. This study formulates an RL-based signal control algorithm capable of representing a full eight-phase ring-barrier configuration consistent with field signal controllers. The algorithm is trained and evaluated under varying traffic demand conditions and benchmarked against state-of-the-practice actuated signal control (ASC). To assess robustness, experiments are conducted across multiple traffic volumes and origin-destination (O-D) demand patterns with varying levels of structural similarity. To improve training efficiency, a distributed asynchronous training architecture is implemented that enables parallel simulation across multiple computing nodes. Results from a case study intersection show that the proposed RL-based signal control significantly outperforms optimized ASC, reducing average delay by 11–32% across movements. A model trained on a single O-D pattern generalizes well to similar unseen demand patterns but degrades under substantially different demand conditions. In contrast, a model trained on diverse O-D patterns demonstrates strong robustness, consistently outperforming ASC even under highly dissimilar unseen demand scenarios.
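The distributed asynchronous training architecture described in the abstract can be pictured as independent simulation workers feeding experience to a central learner. The sketch below is an illustrative toy, not the paper's implementation: `run_episode`, the worker counts, and the queue-based collection loop are all assumptions, threads stand in for separate computing nodes, and the fake transitions stand in for microscopic traffic simulation output (the action index 0–7 loosely mirrors the eight-phase structure).

```python
import queue
import random
import threading

NUM_WORKERS = 4          # parallel simulation workers (illustrative)
EPISODES_PER_WORKER = 5  # episodes each worker runs (illustrative)
STEPS_PER_EPISODE = 10   # transitions collected per episode (illustrative)

def run_episode(seed):
    """Stand-in for one simulated episode: returns fake
    (state, action, reward) transitions; the action indexes one of 8 phases."""
    rng = random.Random(seed)
    return [(rng.random(), rng.randrange(8), -rng.random())
            for _ in range(STEPS_PER_EPISODE)]

def worker(worker_id, out_q):
    # Each worker simulates independently and reports results asynchronously.
    for ep in range(EPISODES_PER_WORKER):
        out_q.put(run_episode(seed=worker_id * 1000 + ep))
    out_q.put(None)  # sentinel: this worker has finished

def train():
    out_q = queue.Queue()
    threads = [threading.Thread(target=worker, args=(i, out_q))
               for i in range(NUM_WORKERS)]
    for t in threads:
        t.start()
    transitions, finished = [], 0
    while finished < NUM_WORKERS:
        batch = out_q.get()  # drain experience as soon as any worker reports
        if batch is None:
            finished += 1
        else:
            transitions.extend(batch)  # a real learner would update the policy here
    for t in threads:
        t.join()
    return transitions

if __name__ == "__main__":
    print(len(train()))  # 4 workers x 5 episodes x 10 steps = 200 transitions
```

In a real deployment each worker would drive its own simulator instance on a separate node, and the learner would consume batches asynchronously so slow simulations do not stall policy updates.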
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning
Adaptive Traffic Signal Control
Robustness
Traffic Demand Variation
Generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning
Adaptive Traffic Signal Control
Robustness Evaluation
Distributed Asynchronous Training
Ring-Barrier Configuration
Dickens Kwesiga
School of Civil and Environmental Engineering, Georgia Institute of Technology, Atlanta, USA, 30332
Angshuman Guin
Principal Research Engineer in Transportation Systems, Civil and Environmental Engineering, Georgia Institute of Technology
Intelligent Transportation Systems · Freeway Operations · Autonomous Vehicles · Connected Vehicle
Khaled Abdelghany
Department of Civil and Environmental Engineering, Southern Methodist University, Dallas, USA, 75205
Michael Hunter
Professor, Georgia Institute of Technology
Transportation · Operations · Safety · Simulation · Civil Engineering