Reachability-Aware Reinforcement Learning for Collision Avoidance in Human-Machine Shared Control

📅 2025-02-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In human-robot shared driving, mandatory collision-avoidance interventions often disrupt driver intent and lack rigorously guaranteed safety boundaries. Method: This paper proposes a novel framework integrating Hamilton-Jacobi (HJ) reachability analysis with a constrained deep reinforcement learning approach (a variant of Soft Actor-Critic). It introduces the Collision Avoidance Reachable Set (CARS) as a hard safety boundary, triggering only minimal, necessary interventions when the vehicle approaches CARS; further, it designs a dynamic authority-allocation mechanism based on sudden-obstacle modeling to jointly ensure safety and driver-intent preservation. Policy verifiability is achieved via offline Bellman equation solving and driver-behavior modeling. Results: Real-vehicle experiments demonstrate precise intervention activation exactly at the CARS boundary with zero collisions; original task performance improves by 12.7%; and robustness across varying driver reaction delays and control styles reaches 98.3%.

Technology Category

Application Category

📝 Abstract
Human-machine shared control in critical collision scenarios aims to aid drivers' accident avoidance through intervening only when necessary. Existing methods count on replanning collision-free trajectories and imposing human-machine tracking, which usually interrupts the driver's intent and increases the risk of conflict. Additionally, the lack of guaranteed trajectory feasibility under extreme conditions can compromise safety and reliability. This paper introduces a Reachability-Aware Reinforcement Learning framework for shared control, guided by Hamilton-Jacobi (HJ) reachability analysis. Machine intervention is activated only when the vehicle approaches the Collision Avoidance Reachable Set (CARS), which represents states where collision is unavoidable. First, we precompute the reachability distributions and the CARS by solving the Bellman equation using offline data. To reduce human-machine conflicts, we develop a driver model for sudden obstacles and propose an authority allocation strategy considering key collision avoidance features. Finally, we train a reinforcement learning agent to reduce human-machine conflicts while enforcing the hard constraint of avoiding entry into the CARS. The proposed method was tested on a real vehicle platform. Results show that the controller intervenes effectively near CARS to prevent collisions while maintaining improved original driving task performance. Robustness analysis further supports its flexibility across different driver attributes.
Problem

Research questions and friction points this paper is trying to address.

Reachability-aware reinforcement learning
Collision avoidance in shared control
Reducing human-machine conflicts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reachability-Aware Reinforcement Learning
Hamilton-Jacobi reachability analysis
Collision Avoidance Reachable Set
🔎 Similar Papers
No similar papers found.