A Human-Oriented Cooperative Driving Approach: Integrating Driving Intention, State, and Conflict

📅 2025-12-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address frequent conflicts and low driver trust arising from intent inconsistency in human-robot collaborative driving, this paper proposes a human-centered cooperative driving framework. At the tactical level, it introduces a novel trajectory planning method that explicitly optimizes for an intent consistency cost, presented as the first such formulation. At the operational level, it designs a driver-state-driven dynamic authority allocation mechanism, implemented via Proximal Policy Optimization (PPO)-based reinforcement learning to enable adaptive control handover. The framework integrates intent recognition, multi-level cooperative decision-making, and quantitative human-robot conflict assessment. It is validated through both high-fidelity simulation and real-vehicle human-in-the-loop experiments. Results demonstrate a 32% improvement in trajectory-intent alignment, a 41% increase in authority allocation rationality, a 57% reduction in human-robot conflicts, and significantly superior overall driving performance compared to state-of-the-art approaches.

📝 Abstract
Human-vehicle cooperative driving serves as a vital bridge to fully autonomous driving by improving driving flexibility and gradually building driver trust and acceptance of autonomous technology. To establish more natural and effective human-vehicle interaction, we propose a Human-Oriented Cooperative Driving (HOCD) approach that minimizes human-machine conflict by prioritizing driver intention and state. In implementation, we take both the tactical and operational levels into account to ensure seamless human-vehicle cooperation. At the tactical level, we design an intention-aware trajectory planning method, using an intention consistency cost as the core metric to evaluate trajectories and align them with driver intention. At the operational level, we develop a control authority allocation strategy based on reinforcement learning, optimizing the policy through a designed reward function to achieve consistency between driver state and authority allocation. The results of simulation and human-in-the-loop experiments demonstrate that our proposed approach not only aligns trajectory planning with driver intention but also ensures reasonable authority allocation. Compared to other cooperative driving approaches, the proposed HOCD approach significantly enhances driving performance and mitigates human-machine conflict. The code is available at https://github.com/i-Qin/HOCD.
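The tactical-level idea of scoring candidate trajectories by how well they match the driver's intended path can be sketched as follows. This is an illustrative assumption, not the paper's actual formulation: the cost terms, the weights `w_lat`/`w_head`, and the `(lateral_offset, heading)` trajectory representation are all hypothetical; see the linked repository for the real implementation.

```python
# Hypothetical sketch of intention-consistency trajectory scoring.
# Trajectories are lists of (lateral_offset, heading) samples; the cost
# penalizes deviation from the driver-intended trajectory. Weights and
# terms are illustrative, not taken from the HOCD paper.

def intention_consistency_cost(candidate, intended, w_lat=1.0, w_head=0.5):
    """Mean squared deviation of a candidate trajectory from the intended one."""
    assert len(candidate) == len(intended), "trajectories must be equally sampled"
    cost = 0.0
    for (y_c, h_c), (y_i, h_i) in zip(candidate, intended):
        cost += w_lat * (y_c - y_i) ** 2 + w_head * (h_c - h_i) ** 2
    return cost / len(candidate)

def select_trajectory(candidates, intended):
    """Pick the candidate with the lowest intention-consistency cost."""
    return min(candidates, key=lambda c: intention_consistency_cost(c, intended))
```

In a full planner this term would be one component of a larger objective alongside safety and comfort costs; here it stands alone to show the intent-alignment mechanism in isolation.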
Problem

Research questions and friction points this paper is trying to address.

Minimizes human-machine conflict in cooperative driving
Aligns trajectory planning with driver intention
Optimizes control authority based on driver state
Innovation

Methods, ideas, or system contributions that make the work stand out.

Intention-aware trajectory planning minimizes human-machine conflict
Reinforcement learning optimizes control authority allocation strategy
Tactical and operational levels ensure seamless human-vehicle cooperation
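The operational-level idea, blending driver and automation commands by an authority weight and rewarding allocations that match driver state while penalizing conflict, can be sketched as below. This is a minimal illustration, not the paper's PPO setup: the function names, the `driver_fitness` scalar, and the reward weights are assumptions, and a real implementation would learn the policy producing `alpha` rather than evaluate a hand-written reward.

```python
# Hypothetical sketch of shared-control authority blending and a reward that
# favors consistency between authority and driver state. Not the HOCD paper's
# actual reward function; weights and terms are illustrative assumptions.

def blend_control(u_driver, u_auto, alpha):
    """Shared command: alpha in [0, 1] is the driver's control authority."""
    assert 0.0 <= alpha <= 1.0
    return alpha * u_driver + (1.0 - alpha) * u_auto

def authority_reward(alpha, driver_fitness, u_driver, u_auto,
                     w_state=1.0, w_conflict=0.5):
    """Reward authority that tracks driver state and avoids command conflict.

    driver_fitness in [0, 1]: 1 = attentive and capable, 0 = impaired.
    Conflict is proxied by command disagreement, weighted by how evenly the
    authority is split (split authority amplifies any disagreement).
    """
    state_term = -w_state * (alpha - driver_fitness) ** 2
    conflict_term = -w_conflict * alpha * (1.0 - alpha) * (u_driver - u_auto) ** 2
    return state_term + conflict_term
```

Under this toy reward, an attentive driver (high fitness) pushes the optimum toward high driver authority, and a degraded driver pushes it toward the automation, which is the qualitative behavior the operational-level mechanism is described as achieving.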
Qin Wang
ETH Zurich
Domain Adaptation, Computer Vision
Shanmin Pang
School of Software Engineering, Xi’an Jiaotong University, Xi’an, China
Jianwu Fang
Xi'an Jiaotong University
Scene understanding, Safe driving perception and planning
Shengye Dong
School of Software Engineering, Xi’an Jiaotong University, Xi’an, China
Fuhao Liu
School of Software Engineering, Xi’an Jiaotong University, Xi’an, China
Jianru Xue
School of Artificial Intelligence, Xi’an Jiaotong University, Xi’an, China
Chen Lv
School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore