🤖 AI Summary
To address the challenges of unsafe exploration in reinforcement learning (RL), severe distributional shift in imitation learning (IL), and the heavy reliance on frequent human intervention in existing human-AI collaboration methods for autonomous driving, this paper proposes a confidence-guided human-AI collaboration (C-HAC) framework. Methodologically, it integrates the distributional soft actor-critic (DSAC) algorithm, return-distribution modeling, shared control, and confidence estimation. Key contributions include: (1) a distributional proxy value propagation mechanism that encodes human intentions via return distributions; (2) a dynamic switching function between human-guided and self-learning policies grounded in policy confidence; and (3) seamless fusion of infrequent human guidance with reward-driven self-learning. Evaluated across diverse simulation scenarios and real-world road tests, the framework significantly improves both safety and traffic efficiency, achieving state-of-the-art performance in the domain.
📝 Abstract
Autonomous driving promises significant advancements in mobility, road safety, and traffic efficiency, yet reinforcement learning and imitation learning face safe-exploration and distribution-shift challenges. Although human-AI collaboration alleviates these issues, it often relies heavily on extensive human intervention, which increases costs and reduces efficiency. This paper develops a confidence-guided human-AI collaboration (C-HAC) strategy to overcome these limitations. First, C-HAC employs a distributional proxy value propagation method within the distributional soft actor-critic (DSAC) framework. By leveraging return distributions to represent human intentions, C-HAC achieves rapid and stable learning of human-guided policies with minimal human interaction. Subsequently, a shared control mechanism is activated to integrate the learned human-guided policy with a self-learning policy that maximizes cumulative rewards. This enables the agent to explore independently and continuously enhance its performance beyond human guidance. Finally, a policy confidence evaluation algorithm capitalizes on DSAC's return distribution networks to facilitate dynamic switching between human-guided and self-learning policies via a confidence-based intervention function. This ensures the agent can pursue optimal policies while maintaining safety and performance guarantees. Extensive experiments across diverse driving scenarios reveal that C-HAC significantly outperforms conventional methods in terms of safety, efficiency, and overall performance, achieving state-of-the-art results. The effectiveness of the proposed method is further validated through real-world road tests in complex traffic conditions. The videos and code are available at: https://github.com/lzqw/C-HAC.
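The confidence-based switching idea in the abstract can be illustrated with a minimal sketch. The paper's actual intervention function is not given here, so everything below is a hypothetical stand-in: `policy_confidence` treats a policy as more trustworthy when its return-distribution mean is high and its spread is low (statistics a DSAC-style critic could supply), and `select_action` falls back to the human-guided policy only when that policy is clearly more confident, with `margin` as an assumed threshold.

```python
def policy_confidence(q_mean: float, q_std: float) -> float:
    # Hypothetical confidence proxy: higher expected return and a
    # tighter return distribution both raise confidence.
    return q_mean - q_std

def select_action(a_self: str, a_human: str,
                  conf_self: float, conf_human: float,
                  margin: float = 0.5) -> str:
    # Confidence-based intervention: prefer the self-learning policy,
    # switching to the human-guided policy only when it is more
    # confident by at least `margin` (an assumed hyperparameter).
    if conf_human > conf_self + margin:
        return a_human
    return a_self

# Toy usage with made-up return-distribution statistics.
conf_self = policy_confidence(q_mean=10.0, q_std=4.0)   # 6.0
conf_human = policy_confidence(q_mean=9.0, q_std=1.0)   # 8.0
print(select_action("self_action", "human_action", conf_self, conf_human))
```

Here the human-guided policy wins despite a lower mean return, because its return estimate is much tighter; this captures the abstract's point that switching is driven by the return distribution rather than the expected value alone.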