Adaptive Tuning of Parameterized Traffic Controllers via Multi-Agent Reinforcement Learning

πŸ“… 2025-12-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Conventional state-feedback controllers lack the adaptability to cope with time-varying traffic congestion. Method: the paper proposes an adaptive tuning framework for parameterized traffic controllers based on multi-agent reinforcement learning (MARL). It decouples high-frequency control execution from low-frequency parameter optimization, combining real-time responsiveness with adaptability to changing conditions, and adopts a distributed multi-agent architecture to improve scalability and robustness to local failures. Results: evaluated on diverse simulated traffic networks, the framework adjusts controller parameters online, significantly outperforms both uncontrolled and fixed-parameter baselines, performs comparably to single-agent RL approaches, and shows greater resilience and faster recovery under partial agent failures.

πŸ“ Abstract
Effective traffic control is essential for mitigating congestion in transportation networks. Conventional traffic management strategies, including route guidance, ramp metering, and traffic signal control, often rely on state feedback controllers, used for their simplicity and reactivity; however, they lack the adaptability required to cope with complex and time-varying traffic dynamics. This paper proposes a multi-agent reinforcement learning framework in which each agent adaptively tunes the parameters of a state feedback traffic controller, combining the reactivity of state feedback controllers with the adaptability of reinforcement learning. By tuning parameters at a lower frequency rather than directly determining control actions at a high frequency, the reinforcement learning agents achieve improved training efficiency while maintaining adaptability to varying traffic conditions. The multi-agent structure further enhances system robustness, as local controllers can operate independently in the event of partial failures. The proposed framework is evaluated on a simulated multi-class transportation network under varying traffic conditions. Results show that the proposed multi-agent framework outperforms the no control and fixed-parameter state feedback control cases, while performing on par with the single-agent RL-based adaptive state feedback control, with a much better resilience to partial failures.
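The core idea in the abstract, decoupling a high-frequency state-feedback control loop from a low-frequency loop that retunes the controller's parameters, can be sketched in a few lines. This is a minimal illustration, not the paper's code: the plant is a toy scalar system, and a simple hill-climbing rule stands in for the RL agent that the paper uses to pick gains.

```python
def run(steps=300, tune_period=50, k_init=0.1):
    """Fast inner loop applies u = -k * x every step; a slow outer
    loop retunes the gain k once every tune_period steps."""
    x = 1.0                 # plant state (e.g., deviation from desired density)
    k = k_init              # feedback gain, retuned at low frequency
    best_k, best_cost = k, float("inf")
    window_cost = 0.0       # accumulated cost over the current tuning window
    for t in range(steps):
        u = -k * x                      # high-frequency state-feedback law
        x = 0.95 * x + 0.5 * u          # simple stable plant update (toy model)
        window_cost += x * x            # quadratic tracking cost
        if (t + 1) % tune_period == 0:  # low-frequency parameter update
            if window_cost < best_cost:
                best_cost, best_k = window_cost, k
            # stand-in for the learned tuning policy: nudge the gain upward,
            # capped to keep the closed loop stable for this toy plant
            k = min(best_k + 0.1, 1.5)
            window_cost = 0.0
    return abs(x), k

final_dev, gain = run()
```

Because the inner loop is a plain feedback law, it keeps running unchanged between tuning steps; in the paper's multi-agent setting, each local controller would keep its last gains if its tuning agent fails, which is where the resilience to partial failures comes from.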
Problem

Research questions and friction points this paper is trying to address.

Adaptively tuning traffic controller parameters using multi-agent reinforcement learning
Combining state feedback reactivity with reinforcement learning adaptability
Enhancing traffic control robustness against partial system failures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent reinforcement learning tunes controller parameters
Lower frequency parameter tuning improves training efficiency
Multi-agent structure enhances system robustness and resilience
Giray Γ–nΓΌr
Delft Center for Systems and Control, Delft University of Technology, Delft, The Netherlands
Azita Dabiri
Delft Center for Systems and Control, Delft University of Technology, Delft, The Netherlands
Bart De Schutter
Full professor & head of department, Delft Center for Systems and Control
Research interests: control of large-scale systems, multi-agent multi-level control, machine learning, hybrid systems