🤖 AI Summary
In mixed-autonomy traffic scenarios where vehicle-to-vehicle communication is unavailable, autonomous vehicles struggle to accurately infer surrounding human drivers’ yielding or overtaking intentions. To address this, we propose a lane-change decision-making framework that integrates social intention modeling with deep reinforcement learning (DRL). Our approach represents inter-vehicle interactions as a directed acyclic graph (DAG), enabling an interpretable yielding-intention inference model that is co-optimized end-to-end with the DRL policy network. In high-fidelity simulations, the method markedly improves lane-change safety: collision rate and hesitation duration both decrease, while the safe lane-change success rate improves by 23.6%. This work constitutes the first systematic integration of social intention modeling into autonomous lane-change decision-making, empirically demonstrating that explicit intention awareness yields substantial performance gains in human–autonomous vehicle cooperative driving.
📝 Abstract
Autonomous driving technology has advanced rapidly over the past decade, making it increasingly likely that autonomous vehicles (AVs) will soon share the roads with human-driven vehicles (HVs). Safe and reliable decision-making remains a significant challenge, particularly when AVs negotiate lane changes while interacting with surrounding HVs. Precise estimation of the intentions of surrounding HVs can therefore help AVs make safer and more reliable lane-change decisions. This requires not only understanding their current behaviors but also predicting their future motions without any direct communication. However, distinguishing between the passing and yielding intentions of surrounding HVs remains difficult. To address this challenge, we propose a social intention estimation algorithm rooted in a Directed Acyclic Graph (DAG), coupled with a decision-making framework employing Deep Reinforcement Learning (DRL) algorithms. To evaluate its performance, we test the proposed framework in a lane-changing scenario within a simulated environment. The experimental results demonstrate that our approach enhances the ability of AVs to navigate lane changes safely and efficiently.
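To make the two core ideas concrete, here is a minimal, hypothetical sketch: an interaction DAG whose directed edges point from an influencing vehicle to the vehicle it influences, plus a toy yielding-intention score. All names, features, and coefficients (`k_gap`, `k_v`, the three-vehicle scenario) are illustrative assumptions, not the paper's actual model.

```python
import math
from collections import defaultdict, deque

def topological_order(nodes, edges):
    """Return a topological ordering of the interaction graph via Kahn's
    algorithm, or None if the graph contains a cycle (i.e., is not a DAG)."""
    indegree = {n: 0 for n in nodes}
    adj = defaultdict(list)
    for src, dst in edges:
        adj[src].append(dst)
        indegree[dst] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in adj[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)
    return order if len(order) == len(nodes) else None

def yield_probability(gap_m, closing_speed_mps, k_gap=0.1, k_v=0.5):
    """Toy logistic intention score (illustrative only): a larger gap and a
    slower closing speed make the surrounding HV more likely to yield."""
    return 1.0 / (1.0 + math.exp(-(k_gap * gap_m - k_v * closing_speed_mps)))

# Hypothetical merge scenario: the ego AV's lane change influences HV1,
# whose reaction in turn influences its follower HV2.
nodes = ["ego", "HV1", "HV2"]
edges = [("ego", "HV1"), ("HV1", "HV2")]
order = topological_order(nodes, edges)  # influence-respecting processing order
p_yield = yield_probability(gap_m=30.0, closing_speed_mps=2.0)
```

Processing vehicles in topological order lets each intention estimate condition on the already-resolved behavior of its upstream influencers, which is one plausible reason an acyclic interaction structure keeps the inference interpretable.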