🤖 AI Summary
This work addresses the challenge of safe interaction in autonomous driving under complex traffic conditions, where the multimodal ambiguity of human behavior complicates decision-making. Conventional approaches often decouple prediction and planning, limiting their effectiveness. To overcome this, the paper proposes a hierarchical belief model that integrates Bayesian inference for multi-resolution intention understanding. It uniquely unifies active probing, intention shaping, and Conditional Value-at-Risk (CVaR) constraints within the belief space, enabling the ego vehicle to actively elicit and influence human behaviors while maintaining rigorous risk control. Evaluated in lane-merging and unsignalized intersection scenarios, the method significantly improves task success rates and reduces completion time, demonstrating interpretable, safe, and efficient interaction with multimodal human drivers.
📝 Abstract
Autonomous driving in complex traffic requires reasoning under uncertainty. Common approaches rely on prediction-based planning or risk-aware control, but these are typically treated in isolation, limiting their ability to capture the coupled nature of action and inference in interactive settings. This gap becomes especially critical in uncertain scenarios, where simply reacting to predictions can lead to unsafe maneuvers or overly conservative behavior. Our central insight is that safe interaction requires not only estimating human behavior but also shaping it when ambiguity poses risks. To this end, we introduce a hierarchical belief model that structures human behavior across coarse discrete intents and fine motion modes, updated via Bayesian inference for interpretable multi-resolution reasoning. On top of this, we develop an active probing strategy that identifies when multimodal ambiguity in human predictions may compromise safety and plans disambiguating actions that both reveal intent and gently steer human decisions toward safer outcomes. Finally, a runtime risk-evaluation layer based on Conditional Value-at-Risk (CVaR) ensures that all probing actions remain within human risk tolerance during influence. Simulations in lane-merging and unsignalized intersection scenarios demonstrate that our approach achieves higher success rates and shorter completion times than existing methods. These results highlight the benefit of coupling belief inference, probing, and risk monitoring, yielding a principled and interpretable framework for planning under uncertainty.
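The coupling the abstract describes — a Bayesian belief update over discrete intents plus a CVaR gate on candidate probing actions — can be illustrated with a minimal sketch. All priors, likelihoods, costs, and the tolerance threshold below are hypothetical placeholders, not values or code from the paper:

```python
import numpy as np

def bayes_update(prior, likelihood):
    """One Bayesian update over discrete hypotheses (intents or motion modes)."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

def cvar(costs, probs, alpha=0.9):
    """CVaR_alpha: expected cost over the worst (1 - alpha) probability tail."""
    order = np.argsort(costs)[::-1]      # most costly outcomes first
    tail = 1.0 - alpha
    remaining, acc = tail, 0.0
    for i in order:
        p = min(probs[i], remaining)     # take probability mass from the tail
        acc += p * costs[i]
        remaining -= p
        if remaining <= 1e-12:
            break
    return acc / tail

# Coarse intent belief over [yield, proceed] (hypothetical uniform prior).
intent = np.array([0.5, 0.5])
# Observed deceleration is more likely under "yield" (hypothetical likelihoods).
intent = bayes_update(intent, np.array([0.8, 0.3]))

# Cost of one candidate probing action under each predicted motion mode
# (hypothetical values), with mode probabilities taken from the belief.
costs = np.array([1.0, 4.0, 10.0])
probs = np.array([0.7, 0.2, 0.1])
risk = cvar(costs, probs, alpha=0.9)

# Admit the probing action only if its tail risk stays within tolerance.
RISK_TOLERANCE = 12.0
print(intent, risk, risk <= RISK_TOLERANCE)
```

With these numbers the posterior shifts toward "yield", and CVaR at alpha = 0.9 equals the cost of the single worst mode (10.0), since that mode alone fills the 10% tail; the action passes the (assumed) tolerance check.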