Safe Reinforcement Learning with a Predictive Safety Filter for Motion Planning and Control: A Drifting Vehicle Example

📅 2025-06-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
In safety-critical autonomous driving scenarios—such as high-speed drifting on slippery roads and emergency obstacle avoidance—conventional motion planning methods struggle with the strong nonlinearity and instability of drifting dynamics, while existing learning-based approaches suffer from expert dependency, low exploration efficiency, and insufficient deployment safety. This paper proposes a deep reinforcement learning framework integrated with a Predictive Safety Filter (PSF), unifying model-based drift dynamics modeling, continuous-control policy learning, and real-time safety constraint enforcement. Evaluated via co-simulation in MATLAB-CarSim, the method guarantees collision-free operation throughout all test cases, reduces trajectory tracking error by over 30%, and significantly enhances control stability and online adaptability. It effectively addresses the dual challenge of efficient learning in high-dimensional continuous action spaces and reliable, safety-guaranteed deployment under stringent operational constraints.

📝 Abstract
Autonomous drifting is a complex and crucial maneuver for safety-critical scenarios like slippery roads and emergency collision avoidance, requiring precise motion planning and control. Traditional motion planning methods often struggle with the high instability and unpredictability of drifting, particularly at high speeds. Recent learning-based approaches have attempted to tackle this issue but often rely on expert knowledge or have limited exploration capabilities. Additionally, they do not effectively address safety concerns during learning and deployment. To overcome these limitations, we propose a novel Safe Reinforcement Learning (RL)-based motion planner for autonomous drifting. Our approach integrates an RL agent with model-based drift dynamics to determine desired drift motion states, while incorporating a Predictive Safety Filter (PSF) that adjusts the agent's actions online to prevent unsafe states. This ensures safe and efficient learning as well as stable drift operation. We validate the effectiveness of our method through simulations on a MATLAB-CarSim platform, demonstrating significant improvements in drift performance, reduced tracking errors, and computational efficiency compared to traditional methods. This strategy promises to extend the capabilities of autonomous vehicles in safety-critical maneuvers.
Problem

Research questions and friction points this paper is trying to address.

Ensuring safe autonomous drifting in unstable conditions
Overcoming limited exploration in learning-based motion planning
Integrating real-time safety adjustments for reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Safe Reinforcement Learning for drifting control
Predictive Safety Filter ensures online safety
Model-based drift dynamics for precise planning
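The core PSF idea above can be sketched in a few lines: before an RL action reaches the vehicle, roll a nominal model forward over a short horizon and, if the predicted trajectory leaves the safe set, minimally modify the action. The kinematic bicycle model, the lane-keeping constraint, the bisection-based blending, and every numeric value below are illustrative assumptions; the paper's PSF instead solves a constrained model-predictive problem online against its drift dynamics model.

```python
import numpy as np

# Illustrative Predictive Safety Filter (PSF) sketch. All parameters and the
# simplified kinematic bicycle model are assumptions for illustration only.

DT = 0.05        # integration step [s]
HORIZON = 20     # prediction steps
WHEELBASE = 2.7  # [m]
Y_MAX = 2.0      # lane half-width [m]; |y| <= Y_MAX defines the safe set


def rollout_safe(state, steer, accel):
    """Propagate the nominal model; return False if the constraint is violated."""
    x, y, yaw, v = state
    beta = np.arctan(0.5 * np.tan(steer))  # kinematic sideslip angle
    for _ in range(HORIZON):
        x += v * np.cos(yaw + beta) * DT
        y += v * np.sin(yaw + beta) * DT
        yaw += v / WHEELBASE * np.sin(beta) * DT
        v = max(v + accel * DT, 0.0)
        if abs(y) > Y_MAX:
            return False
    return True


def psf(state, steer_rl, accel_rl, fallback=(0.0, -3.0)):
    """Pass the RL action through unchanged when its rollout is safe;
    otherwise bisect a blend toward a conservative fallback action
    (straight steering plus braking), i.e. minimal intervention."""
    if rollout_safe(state, steer_rl, accel_rl):
        return steer_rl, accel_rl
    lo, hi = 0.0, 1.0  # blend factor: 0 = RL action, 1 = fallback
    for _ in range(16):
        mid = 0.5 * (lo + hi)
        cand = ((1 - mid) * steer_rl + mid * fallback[0],
                (1 - mid) * accel_rl + mid * fallback[1])
        if rollout_safe(state, *cand):
            hi = mid  # candidate is safe: try intervening less
        else:
            lo = mid  # still unsafe: intervene more
    return ((1 - hi) * steer_rl + hi * fallback[0],
            (1 - hi) * accel_rl + hi * fallback[1])
```

For example, at 20 m/s a 0.3 rad steering command whose predicted path exits the 2 m corridor is attenuated toward straight-line braking, while any command whose rollout stays safe is passed through untouched, which is what lets the RL agent keep exploring freely inside the safe set.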