🤖 AI Summary
To address frequent conflicts and low driver trust arising from intent inconsistency in human-robot collaborative driving, this paper proposes a human-centered cooperative driving framework. At the tactical level, it introduces a novel trajectory planning method that explicitly optimizes for intent consistency cost—the first such formulation. At the operational level, it designs a driver-state-driven dynamic authority allocation mechanism, implemented via Proximal Policy Optimization (PPO)-based reinforcement learning to enable adaptive control handover. The framework integrates intent recognition, multi-level cooperative decision-making, and quantitative human-robot conflict assessment. It is rigorously validated through both high-fidelity simulation and real-vehicle human-in-the-loop experiments. Results demonstrate a 32% improvement in trajectory-intent alignment, a 41% increase in authority allocation rationality, a 57% reduction in human-robot conflicts, and significantly superior overall driving performance compared to state-of-the-art approaches.
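The summary describes a driver-state-driven authority allocation trained with PPO. The paper's actual reward function is not given here, so the following is only an illustrative sketch of the kind of reward shaping such a mechanism might use: the names `authority_reward`, `driver_attention`, and the weights are hypothetical, and the rule "lower driver attention → more machine authority" is an assumption, not the paper's formulation.

```python
def authority_reward(driver_attention: float, machine_authority: float,
                     torque_conflict: float,
                     w_match: float = 1.0, w_conflict: float = 0.5) -> float:
    """Illustrative reward for an authority-allocation policy (hypothetical).

    driver_attention:  estimated driver state in [0, 1] (1 = fully attentive)
    machine_authority: fraction of control held by the machine in [0, 1]
    torque_conflict:   magnitude of opposing human/machine steering torque
    """
    # Assumed heuristic: the machine should take up the slack when the
    # driver is inattentive, and yield when the driver is engaged.
    desired_machine = 1.0 - driver_attention
    match_penalty = w_match * abs(machine_authority - desired_machine)
    conflict_penalty = w_conflict * torque_conflict
    return -(match_penalty + conflict_penalty)
```

A PPO agent maximizing this reward would learn to shift authority toward the machine as estimated driver attention drops, while being penalized whenever its intervention fights the driver's input.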
📝 Abstract
Human-vehicle cooperative driving serves as a vital bridge to fully autonomous driving by improving driving flexibility and gradually building driver trust and acceptance of autonomous technology. To establish more natural and effective human-vehicle interaction, we propose a Human-Oriented Cooperative Driving (HOCD) approach that minimizes human-machine conflict by prioritizing driver intention and state. In implementation, we take both the tactical and operational levels into account to ensure seamless human-vehicle cooperation. At the tactical level, we design an intention-aware trajectory planning method that uses an intention consistency cost as the core metric to evaluate candidate trajectories and align them with driver intention. At the operational level, we develop a control authority allocation strategy based on reinforcement learning, optimizing the policy through a designed reward function to achieve consistency between driver state and authority allocation. The results of simulation and human-in-the-loop experiments demonstrate that our proposed approach not only aligns with driver intention in trajectory planning but also ensures reasonable authority allocation. Compared to other cooperative driving approaches, the proposed HOCD approach significantly enhances driving performance and mitigates human-machine conflict. The code is available at https://github.com/i-Qin/HOCD.
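The abstract's tactical-level idea is to score candidate trajectories by an intention consistency cost alongside the usual planning cost. The paper's exact cost terms are not stated here, so this is a minimal sketch under assumed signals: an inferred target lane and speed stand in for "driver intention", and the weights and helper names (`intention_consistency_cost`, `select_trajectory`) are hypothetical.

```python
def intention_consistency_cost(traj, intended_lane, intended_speed,
                               w_lane=1.0, w_speed=0.1):
    """Penalize deviation of a candidate trajectory's endpoint from the
    driver's inferred intention (illustrative, not the paper's formulation).

    traj: list of (lane, speed) samples along the candidate trajectory.
    """
    end_lane, end_speed = traj[-1]
    return (w_lane * abs(end_lane - intended_lane)
            + w_speed * abs(end_speed - intended_speed))

def select_trajectory(candidates, base_costs, intended_lane, intended_speed):
    """Pick the candidate minimizing base planning cost + intention cost."""
    scored = [
        (base + intention_consistency_cost(traj, intended_lane, intended_speed), i)
        for i, (traj, base) in enumerate(zip(candidates, base_costs))
    ]
    return min(scored)[1]

# Driver intends a lane change to lane 1 at ~15 m/s.
candidates = [
    [(0, 14.0), (0, 14.5), (0, 15.0)],    # keep lane 0
    [(0, 14.0), (0.5, 14.5), (1, 15.0)],  # change to lane 1
]
best = select_trajectory(candidates, base_costs=[0.2, 0.3],
                         intended_lane=1, intended_speed=15.0)
print(best)  # → 1: the intention-aligned candidate wins despite a higher base cost
```

The point of the example is the trade-off: a trajectory that is slightly worse by the base planning metric can still be selected when it matches the driver's inferred intention, which is how an intention-consistency term reduces human-machine conflict.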