Confidence-Guided Human-AI Collaboration: Reinforcement Learning with Distributional Proxy Value Propagation for Autonomous Driving

📅 2025-06-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenges of unsafe exploration in reinforcement learning (RL), severe distributional shift in imitation learning (IL), and excessive reliance on high-frequency human intervention in existing human-AI collaboration methods for autonomous driving, this paper proposes a confidence-guided human-AI collaboration (C-HAC) framework. Methodologically, it integrates the Distributional Soft Actor-Critic (DSAC), return-distribution modeling, shared control, and confidence estimation. Key contributions include: (1) the first distributional proxy value propagation mechanism, which represents human intent through return distributions; (2) a dynamic human-AI switching function grounded in policy confidence; and (3) seamless fusion of infrequent human guidance with self-driven RL. Evaluated across diverse simulation scenarios and real-world road tests, the framework significantly improves both safety and traffic efficiency, achieving state-of-the-art performance in the domain.

📝 Abstract
Autonomous driving promises significant advancements in mobility, road safety, and traffic efficiency, yet reinforcement learning and imitation learning face safe-exploration and distribution-shift challenges. Although human-AI collaboration alleviates these issues, it often relies heavily on extensive human intervention, which increases costs and reduces efficiency. This paper develops a confidence-guided human-AI collaboration (C-HAC) strategy to overcome these limitations. First, C-HAC employs a distributional proxy value propagation method within the distributional soft actor-critic (DSAC) framework. By leveraging return distributions to represent human intentions, C-HAC achieves rapid and stable learning of human-guided policies with minimal human interaction. Subsequently, a shared control mechanism is activated to integrate the learned human-guided policy with a self-learning policy that maximizes cumulative rewards. This enables the agent to explore independently and continuously enhance its performance beyond human guidance. Finally, a policy confidence evaluation algorithm capitalizes on DSAC's return distribution networks to facilitate dynamic switching between human-guided and self-learning policies via a confidence-based intervention function. This ensures the agent can pursue optimal policies while maintaining safety and performance guarantees. Extensive experiments across diverse driving scenarios reveal that C-HAC significantly outperforms conventional methods in terms of safety, efficiency, and overall performance, achieving state-of-the-art results. The effectiveness of the proposed method is further validated through real-world road tests in complex traffic conditions. The videos and code are available at: https://github.com/lzqw/C-HAC.
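The abstract describes switching between a human-guided and a self-learning policy based on DSAC's return distributions. A minimal sketch of that idea, assuming each policy's return is modeled as an independent Gaussian (the function names, the dominance criterion, and the 0.5 threshold are illustrative assumptions, not the paper's exact formulation):

```python
from math import erf, sqrt

def policy_confidence(mu_self, sigma_self, mu_human, sigma_human):
    """Probability that the self-learning policy's return exceeds the
    human-guided policy's return, assuming independent Gaussian return
    distributions N(mu, sigma^2) as in DSAC-style critics."""
    diff_mu = mu_self - mu_human
    diff_sigma = sqrt(sigma_self**2 + sigma_human**2)
    # P(Z_self > Z_human) via the Gaussian CDF of the difference
    return 0.5 * (1.0 + erf(diff_mu / (diff_sigma * sqrt(2.0))))

def select_policy(mu_self, sigma_self, mu_human, sigma_human, threshold=0.5):
    """Act with the self-learning policy only when its estimated return
    distribution dominates the human-guided one with enough confidence."""
    conf = policy_confidence(mu_self, sigma_self, mu_human, sigma_human)
    return "self" if conf >= threshold else "human"
```

With a clearly higher self-policy value estimate (e.g. `mu_self=10, mu_human=5`, unit variances) the gate selects the self-learning policy; when the human-guided estimate dominates, it defers to human guidance.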
Problem

Research questions and friction points this paper is trying to address.

Overcoming safe-exploration and distribution-shift challenges in autonomous driving
Reducing reliance on extensive human intervention in human-AI collaboration
Ensuring safety and performance via dynamic policy switching
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributional proxy value propagation in DSAC
Shared control mechanism integrates human and AI policies
Confidence-based dynamic switching between policies
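The shared-control bullet above can be sketched as an intervention gate: when the self-learning critic is uncertain (large return-distribution standard deviation), control falls back to the human-guided policy. The gate rule, names, and threshold here are illustrative assumptions, not the paper's exact mechanism:

```python
def gate(sigma_self, sigma_max=2.0):
    """Intervention gate: True means defer to the human-guided policy
    because the self-learning critic's return estimate is too uncertain."""
    return sigma_self > sigma_max

def shared_control(a_human, a_self, sigma_self, sigma_max=2.0):
    """Pick the action that drives the vehicle for this step."""
    return a_human if gate(sigma_self, sigma_max) else a_self
```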
Zeqiao Li
Tianjin Key Laboratory of Intelligent Unmanned Swarm Technology and System, School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
Yijing Wang
Tianjin Key Laboratory of Intelligent Unmanned Swarm Technology and System, School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
Haoyu Wang
Tianjin Key Laboratory of Intelligent Unmanned Swarm Technology and System, School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China; Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai, 200240
Zheng Li
Tianjin Key Laboratory of Intelligent Unmanned Swarm Technology and System, School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
Peng Li
Tianjin Key Laboratory of Intelligent Unmanned Swarm Technology and System, School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
Zhiqiang Zuo
Tianjin Key Laboratory of Intelligent Unmanned Swarm Technology and System, School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
Chuan Hu
Associate Professor of Mechanical Engineering, Shanghai Jiao Tong University
Autonomous Driving · Decision and Planning · HMI · Human-AI Collaboration · HiL RL