CaRL: Learning Scalable Planning Policies with Simple Rewards

📅 2025-04-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the poor generalization of rule-based planners in autonomous driving's long-tail scenarios and the instability of existing multi-component reward designs for reinforcement learning under large-batch distributed training, this paper proposes a minimalist yet highly effective single-term reward formulation: route completion as the core supervisory signal, augmented by infraction-triggered episode termination and multiplicative reward decay. Departing from conventional complex reward shaping, the authors show empirically, within the PPO framework, that a single reward term significantly improves training stability and scalability in large-scale distributed settings. The method achieves 64 DS on the CARLA Longest6 v2 benchmark and, on nuPlan's Val14 benchmark, scores 91.3 in non-reactive and 90.6 in reactive traffic, while inference latency improves by an order of magnitude over prior methods.

📝 Abstract
We investigate reinforcement learning (RL) for privileged planning in autonomous driving. State-of-the-art approaches for this task are rule-based, but these methods do not scale to the long tail. RL, on the other hand, is scalable and does not suffer from compounding errors like imitation learning. Contemporary RL approaches for driving use complex shaped rewards that sum multiple individual rewards, e.g., progress, position, or orientation rewards. We show that PPO fails to optimize a popular version of these rewards when the mini-batch size is increased, which limits the scalability of these approaches. Instead, we propose a new reward design based primarily on optimizing a single intuitive reward term: route completion. Infractions are penalized by terminating the episode or multiplicatively reducing route completion. We find that PPO scales well with higher mini-batch sizes when trained with our simple reward, even improving performance. Training with large mini-batch sizes enables efficient scaling via distributed data parallelism. We scale PPO to 300M samples in CARLA and 500M samples in nuPlan with a single 8-GPU node. The resulting model achieves 64 DS on the CARLA Longest6 v2 benchmark, outperforming other RL methods with more complex rewards by a large margin. Requiring only minimal adaptations from its use in CARLA, the same method is the best learning-based approach on nuPlan. It scores 91.3 in non-reactive and 90.6 in reactive traffic on the Val14 benchmark while being an order of magnitude faster than prior work.
Problem

Research questions and friction points this paper is trying to address.

RL for scalable autonomous driving planning policies
Addressing limitations of rule-based and imitation learning methods
Optimizing simple rewards to improve PPO scalability and performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses simple route completion reward design
Scales PPO with large mini-batch sizes
Distributed data parallelism for efficiency
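The reward design described above can be sketched as a small step function: the per-step reward is the new route completion gained since the last step, soft infractions shrink it multiplicatively, and hard infractions terminate the episode. This is a minimal illustrative sketch of the idea, not the paper's code; the function name, the split into "soft" and "hard" infractions, and the decay factors are assumptions for illustration.

```python
def step_reward(rc_prev, rc_now, soft_decay_factors, hard_infraction):
    """Single-term reward sketch: reward = route-completion delta.

    rc_prev, rc_now: cumulative route completion in [0, 1] before/after the step.
    soft_decay_factors: multiplicative penalties in (0, 1] for minor infractions
        this step (e.g. a hypothetical 0.5 for a lane violation).
    hard_infraction: True for severe infractions (e.g. a collision), which
        end the episode with no reward for the step.

    Returns (reward, terminated).
    """
    if hard_infraction:
        # Penalize by termination: all future route completion is forfeited.
        return 0.0, True
    reward = rc_now - rc_prev  # progress along the route this step
    for factor in soft_decay_factors:
        reward *= factor       # multiplicative reward decay
    return reward, False
```

A step that advances route completion from 0.0 to 0.25 with no infractions yields reward 0.25; the same progress with one 0.5 decay factor yields 0.125; a collision yields zero reward and ends the episode.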