🤖 AI Summary
To address the limited robustness of regional traffic signal control under fluctuating demand, this paper proposes a single-agent reinforcement learning framework that circumvents the complexity of multi-agent coordination. Methodologically, it integrates the DreamerV3 world model with the road network's adjacency matrix to jointly encode topological structure and real-time queue states. A sparse reward mechanism is designed around queue dissipation, augmented by probe-vehicle-driven dynamic queue estimation and feedback-based phase adjustment. Evaluated in SUMO simulations under 10%–30% OD demand fluctuations, the approach significantly reduces average intersection queue length, demonstrating strong robustness and dynamic adaptability. The framework establishes a scalable and interpretable paradigm for adaptive signal control in complex, time-varying traffic environments.
📝 Abstract
Traffic congestion, driven largely by intersection queuing, significantly degrades urban living standards, safety, environmental quality, and economic efficiency. While Traffic Signal Control (TSC) systems hold potential for congestion mitigation, traditional optimization models often fail to capture the complexity and dynamics of real-world traffic. This study introduces a novel single-agent reinforcement learning (RL) framework for regional adaptive TSC that circumvents the coordination complexities of multi-agent systems through a centralized decision-making paradigm. The model employs an adjacency matrix to unify the encoding of road network topology, real-time queue states derived from probe vehicle data, and current signal timing parameters. Leveraging the sample-efficient learning of the DreamerV3 world model, the agent learns control policies whose actions sequentially select intersections and adjust their signal phase splits to regulate traffic inflow and outflow, analogous to a feedback control system. Reward design prioritizes queue dissipation, directly linking the congestion metric (queue length) to control actions. Simulation experiments in SUMO demonstrate the model's effectiveness: under inference scenarios with multi-level (10%, 20%, 30%) Origin-Destination (OD) demand fluctuations, the framework remains robust to demand fluctuations and significantly reduces queue lengths. This work establishes a new paradigm for intelligent traffic control compatible with probe vehicle technology. Future research will focus on enhancing practical applicability by incorporating stochastic OD demand fluctuations during training and exploring regional optimization mechanisms for contingency events.
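To make the encoding and reward ideas above concrete, the following is a minimal illustrative sketch (not the paper's implementation): it assumes the observation is built by concatenating the adjacency matrix with per-intersection queue estimates and current green splits, and that the sparse queue-dissipation reward pays out only when the total regional queue length shrinks between control steps. All function names and shapes here are hypothetical.

```python
import numpy as np

def encode_state(adjacency: np.ndarray, queues: np.ndarray,
                 splits: np.ndarray) -> np.ndarray:
    """Unify topology, queue states, and signal timing in one observation.

    adjacency: (N, N) 0/1 road-network adjacency matrix
    queues:    (N,)  estimated queue length per intersection (e.g. from probe vehicles)
    splits:    (N,)  current green-split fraction per intersection
    """
    # Append per-intersection features as extra columns so a single
    # centralized agent observes the whole region at once.
    features = np.stack([queues, splits], axis=1)          # (N, 2)
    return np.concatenate([adjacency, features], axis=1)   # (N, N + 2)

def queue_dissipation_reward(queues_before: np.ndarray,
                             queues_after: np.ndarray) -> float:
    """Sparse reward: positive only when the total queue length shrinks,
    zero otherwise, tying the reward directly to queue dissipation."""
    delta = queues_before.sum() - queues_after.sum()
    return float(delta) if delta > 0 else 0.0
```

In a SUMO loop, `queues` would be refreshed each control interval from probe-vehicle observations, and the action (pick an intersection, nudge its split) would be applied before the next reward evaluation.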