Behavior-Constrained Reinforcement Learning with Receding-Horizon Credit Assignment for High-Performance Control

📅 2026-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of reinforcement learning, which often deviates from human-like behavior, and imitation learning, which struggles to surpass demonstrator performance. The authors propose a behavior-constrained reinforcement learning framework that models short-horizon future trajectories via receding-horizon prediction and conditions policy learning on reference trajectories. Expert behavioral consistency is enforced at the trajectory distribution level rather than through point-wise matching. By integrating receding-horizon credit assignment with behavior constraints, the approach enables robust generalization under disturbances and varying operating conditions. Evaluated in a high-fidelity racing simulator using professional driver data, the trained policies achieve competitive lap times while closely replicating expert driving styles. Human-in-the-loop assessments further confirm the accurate reproduction of tuning-sensitive driving characteristics.
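The summary above describes combining a task reward with a trajectory-level behavior term computed over a short predicted horizon. The paper does not give its exact formulation here, so the following is only a minimal sketch of the general idea: score a predicted short-horizon rollout against an expert reference segment as a whole (rather than point-wise at a single step) and fold that into the reward. The function name, the Gaussian-kernel similarity, and the weights `beta` and `sigma` are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def lookahead_reward(pred_traj, ref_traj, task_reward,
                     horizon=10, beta=0.5, sigma=1.0):
    """Hypothetical look-ahead reward: task reward plus a behavior bonus
    comparing a short predicted rollout against an expert reference
    segment over a receding horizon (illustrative, not the paper's
    actual reward)."""
    pred = np.asarray(pred_traj[:horizon], dtype=float)
    ref = np.asarray(ref_traj[:horizon], dtype=float)
    # Mean squared deviation aggregated over the whole horizon, so the
    # constraint acts at the trajectory level, not on one state.
    dev = np.mean(np.sum((pred - ref) ** 2, axis=-1))
    behavior_bonus = np.exp(-dev / (2.0 * sigma ** 2))
    return task_reward + beta * behavior_bonus
```

With this shaping, a rollout that tracks the expert segment exactly receives the full bonus, and the bonus decays smoothly as the predicted trajectory drifts, which is one simple way to trade off performance against behavioral deviation.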
📝 Abstract
Learning high-performance control policies that remain consistent with expert behavior is a fundamental challenge in robotics. Reinforcement learning can discover high-performing strategies but often departs from desirable human behavior, whereas imitation learning is limited by demonstration quality and struggles to improve beyond expert data. We propose a behavior-constrained reinforcement learning framework that improves beyond demonstrations while explicitly controlling deviation from expert behavior. Because expert-consistent behavior in dynamic control is inherently trajectory-level, we introduce a receding-horizon predictive mechanism that models short-term future trajectories and provides look-ahead rewards during training. To account for the natural variability of human behavior under disturbances and changing conditions, we further condition the policy on reference trajectories, allowing it to represent a distribution of expert-consistent behaviors rather than a single deterministic target. Empirically, we evaluate the approach in high-fidelity race car simulation using data from professional drivers, a domain characterized by extreme dynamics and narrow performance margins. The learned policies achieve competitive lap times while maintaining close alignment with expert driving behavior, outperforming baseline methods in both performance and imitation quality. Beyond standard benchmarks, we conduct human-grounded evaluation in a driver-in-the-loop simulator and show that the learned policies reproduce setup-dependent driving characteristics consistent with the feedback of top-class professional race drivers. These results demonstrate that our method enables learning high-performance control policies that are both optimal and behavior-consistent, and can serve as reliable surrogates for human decision-making in complex control systems.
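The abstract's reference-conditioning idea, i.e. feeding the policy an upcoming reference-trajectory segment alongside the current state so one network can represent a distribution of expert-consistent behaviors, can be sketched as below. This is a toy linear policy under assumed shapes; the function, feature layout, and `weights` parameter are hypothetical and stand in for whatever network the paper actually uses.

```python
import numpy as np

def reference_conditioned_action(state, ref_segment, weights):
    """Hypothetical reference-conditioned policy: the action is a
    function of the current state concatenated with a short upcoming
    reference-trajectory segment, so changing the reference changes
    the behavior without retraining (structure is illustrative only)."""
    state = np.asarray(state, dtype=float)
    ref = np.asarray(ref_segment, dtype=float)
    # Flatten the reference segment into the policy input features.
    features = np.concatenate([state, ref.ravel()])
    # Bounded control output, e.g. normalized steering/throttle.
    return np.tanh(weights @ features)
```

Conditioning on the reference is what lets the same policy reproduce different expert-consistent styles under disturbances: the reference segment, not a fixed target, selects which behavior in the distribution to track.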
Problem

Research questions and friction points this paper is trying to address.

behavior-constrained reinforcement learning
expert behavior consistency
high-performance control
trajectory-level behavior
imitation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Behavior-Constrained Reinforcement Learning
Receding-Horizon Credit Assignment
Reference-Conditioned Policy
Trajectory-Level Imitation
High-Performance Control