Balancing Multiple Objectives in Urban Traffic Control with Reinforcement Learning from AI Feedback

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a novel extension of reinforcement learning from AI feedback (RLAIF) to multi-objective adaptive traffic signal control. It addresses a common limitation of existing approaches, which collapse onto a single dominant objective when goals conflict and therefore fail to accommodate diverse user preferences. By leveraging large language models to generate multi-objective preference labels, without requiring manual annotation or intricate reward-function engineering, the method trains adaptive control policies that reflect varying user priorities. Experimental results demonstrate that the proposed approach effectively balances competing objectives such as traffic efficiency, fairness, and energy consumption, enhancing both the practicality and scalability of traffic signal control strategies in real-world urban environments.

📝 Abstract
Reward design has been one of the central challenges for real-world reinforcement learning (RL) deployment, especially in settings with multiple objectives. Preference-based RL offers an appealing alternative by learning from human preferences over pairs of behavioural outcomes. More recently, RL from AI feedback (RLAIF) has demonstrated that large language models (LLMs) can generate preference labels at scale, mitigating the reliance on human annotators. However, existing RLAIF work typically focuses on single-objective tasks, leaving open the question of how RLAIF handles systems that involve multiple objectives. In such systems, trade-offs among conflicting objectives are difficult to specify, and policies risk collapsing into optimizing for a dominant goal. In this paper, we explore the extension of the RLAIF paradigm to multi-objective self-adaptive systems. We show that multi-objective RLAIF can produce policies that yield balanced trade-offs reflecting different user priorities without laborious reward engineering. We argue that integrating RLAIF into multi-objective RL offers a scalable path toward user-aligned policy learning in domains with inherently conflicting objectives.
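The abstract's core mechanism, learning a reward from pairwise preferences over multi-objective outcomes, can be sketched in a few lines. Everything below is a hypothetical illustration, not the paper's implementation: the two objectives (average wait time and energy use), the normalization constants, and the scripted `prefer` oracle standing in for the LLM judge are our own assumptions, chosen only to show how a Bradley-Terry reward model recovers a user's priority weighting.

```python
import math
import random

def episode_costs(rng):
    """One simulated signal-control episode: (avg wait in s, energy in kWh)."""
    return (rng.uniform(20.0, 120.0), rng.uniform(5.0, 30.0))

def normalize(c):
    """Scale both objective costs to roughly [0, 1]."""
    return (c[0] / 120.0, c[1] / 30.0)

def prefer(a, b, true_weights):
    """Scripted preference oracle standing in for the LLM judge:
    episode `a` is preferred if its weighted normalized cost is lower."""
    an, bn = normalize(a), normalize(b)
    score = lambda c: true_weights[0] * c[0] + true_weights[1] * c[1]
    return score(an) < score(bn)

def learn_reward(pairs, prefs, lr=0.1, epochs=300):
    """Fit a linear reward r(c) = -(w0*wait_n + w1*energy_n) from pairwise
    preferences via a Bradley-Terry logistic likelihood (gradient ascent)."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for (a, b), a_preferred in zip(pairs, prefs):
            an, bn = normalize(a), normalize(b)
            ra = -(w[0] * an[0] + w[1] * an[1])
            rb = -(w[0] * bn[0] + w[1] * bn[1])
            p = 1.0 / (1.0 + math.exp(rb - ra))  # P(a preferred | w)
            g = (1.0 if a_preferred else 0.0) - p
            # Gradient of the log-likelihood with respect to each weight.
            w[0] += lr * g * (-(an[0] - bn[0]))
            w[1] += lr * g * (-(an[1] - bn[1]))
    return w

rng = random.Random(0)
# An "efficiency-first" user profile: wait time matters 4x more than energy.
true_weights = (0.8, 0.2)
pairs = [(episode_costs(rng), episode_costs(rng)) for _ in range(200)]
prefs = [prefer(a, b, true_weights) for a, b in pairs]
w = learn_reward(pairs, prefs)
# The learned reward should penalize waiting more heavily than energy use.
```

In the paper's setting, the `prefer` oracle would be replaced by prompted LLM comparisons of behavioural outcomes, and different user priorities (e.g. fairness-first versus efficiency-first) would yield differently weighted reward models for policy training.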
Problem

Research questions and friction points this paper is trying to address.

multi-objective reinforcement learning
RL from AI feedback
urban traffic control
preference-based learning
conflicting objectives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-objective Reinforcement Learning
Reinforcement Learning from AI Feedback (RLAIF)
Large Language Models
Preference-based Learning
Urban Traffic Control