🤖 AI Summary
This work proposes an offline model-based reinforcement learning approach that integrates engineering priors to address the inefficiency of traditional vehicle braking controllers, which rely heavily on manual calibration. By leveraging data-driven techniques, the method constructs a high-fidelity vehicle dynamics model and then optimizes braking strategies without requiring online interaction with the environment. The resulting controller achieves braking performance comparable to a production-grade anti-lock braking system (ABS) in real-world braking tasks, demonstrating its potential as a viable alternative to current industry-standard systems.
📝 Abstract
The braking system, a key module ensuring the safety and steerability of modern vehicles, relies on extensive manual calibration during production. Reducing this labor and time cost while maintaining Vehicle Braking Controller (VBC) performance would greatly benefit the vehicle industry. Model-based methods in offline reinforcement learning, which enable policy exploration within a data-driven dynamics model, offer a promising solution for real-world control tasks. This work proposes ReinVBC, which applies an offline model-based reinforcement learning approach to the vehicle braking control problem. We introduce useful engineering designs into the paradigm of model learning and utilization to obtain a reliable vehicle dynamics model and a capable braking policy. Experimental results demonstrate the capability of our method in real-world vehicle braking and its potential to replace the production-grade anti-lock braking system.
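The core recipe the abstract describes can be illustrated in miniature: fit a dynamics model to logged transitions, then optimize the policy entirely inside that model, never querying the real system. The sketch below is a hypothetical toy, not the paper's method: a 1-D braking environment, a linear least-squares dynamics model, and a grid search over constant brake pressures evaluated only through model rollouts. All names, dynamics, and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" braking environment, used ONLY to generate the offline dataset.
# State: speed (m/s); action: brake pressure in [0, 1]. Hypothetical dynamics.
def real_step(speed, brake, dt=0.1):
    decel = 8.0 * brake + 0.5          # braking force + rolling resistance (m/s^2)
    return max(speed - decel * dt, 0.0)

# 1) Offline dataset from a random logging (behavior) policy.
data = []
for _ in range(200):
    s = rng.uniform(5.0, 30.0)
    for _ in range(20):
        a = rng.uniform(0.0, 1.0)
        s_next = real_step(s, a)
        data.append((s, a, s_next))
        s = s_next

# 2) Fit a dynamics model (here: linear least squares) on the logged transitions.
X = np.array([[s, a, 1.0] for s, a, _ in data])
y = np.array([s_next for _, _, s_next in data])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def model_step(speed, brake, dt=0.1):
    return max(float(w @ np.array([speed, brake, 1.0])), 0.0)

# 3) Optimize the policy purely inside the learned model (no online interaction).
# Policy class: constant brake pressure; objective: minimize stopping distance.
def rollout_distance(brake, step_fn, speed=20.0, dt=0.1, horizon=100):
    dist = 0.0
    for _ in range(horizon):
        dist += speed * dt
        speed = step_fn(speed, brake)
    return dist

candidates = np.linspace(0.1, 1.0, 10)
best = min(candidates, key=lambda b: rollout_distance(b, model_step))
print(best)
```

In this toy task the search unsurprisingly selects full braking; the point is the workflow, in which the real environment appears only during data collection. ReinVBC's engineering priors and policy optimizer are more sophisticated than this grid search, and a production-grade controller would also need uncertainty handling that offline model-based methods typically add on top.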