🤖 AI Summary
Existing conformal prediction frameworks assume independence among surrounding agents, rendering them inadequate for closed-loop multi-agent systems where endogenous distribution shift arises as surrounding agents react to the ego agent's uncertainty-aware behavior. This paper formally models such shift and proposes an iterative, well-calibrated prediction framework that tightly couples uncertainty quantification with closed-loop control, jointly ensuring probabilistic safety and behavioral efficacy during dynamic interaction. Leveraging convergence theory, we design an iterative calibration mechanism that explicitly accounts for non-ego agents' responses to the ego agent's policy. Evaluated in 2- and 3-agent simulations, the approach improves task success rate by up to 9.6% over baselines, reduces collision frequency, and avoids excessive conservatism. To the best of our knowledge, this is the first motion planning framework for interactive autonomous systems that provides adaptive, uncertainty-aware planning with rigorous theoretical guarantees under this form of distribution shift.
📝 Abstract
Uncertainty-aware prediction is essential for safe motion planning, especially when using learned models to forecast the behavior of surrounding agents. Conformal prediction is a statistical tool often used to produce uncertainty-aware prediction regions for machine learning models. Most existing frameworks that use conformal prediction-based uncertainty estimates, however, assume that the surrounding agents are non-interactive. The reason is that in closed loop, as the uncertainty-aware ego agent changes its behavior to account for prediction uncertainty, the surrounding agents respond to this change, producing a distribution shift that we call endogenous distribution shift. To address this challenge, we introduce an iterative conformal prediction framework that systematically adapts the uncertainty-aware ego-agent controller to the endogenous distribution shift. The proposed method provides probabilistic safety guarantees while adapting to the evolving behavior of reactive, non-ego agents. We establish a model for the endogenous distribution shift and derive the conditions under which the iterative conformal prediction pipeline converges despite it. We validate our framework in simulation for 2- and 3-agent interaction scenarios, demonstrating collision avoidance without overly conservative behavior and an improvement in success rates of up to 9.6% compared to other conformal prediction-based baselines.
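To make the idea concrete, the loop described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `rollout_fn` is a hypothetical interface that runs the closed loop with an ego controller planning around prediction regions of a given radius and returns the nonconformity scores observed on that rollout. The split-conformal quantile rule is standard; the fixed-point iteration over the region radius mirrors the iterative recalibration the abstract describes, and it converges when the score distribution's dependence on the radius is sufficiently well-behaved (e.g. a contraction).

```python
import numpy as np

def conformal_quantile(scores, alpha):
    """Finite-sample-valid (1 - alpha) empirical quantile of
    nonconformity scores (standard split conformal prediction)."""
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))  # rank giving the coverage guarantee
    k = min(k, n)
    return np.sort(scores)[k - 1]

def iterative_calibration(rollout_fn, alpha=0.1, iters=20, tol=1e-3):
    """Iteratively recalibrate the prediction-region radius.

    rollout_fn(radius) -> array of nonconformity scores collected by
    running the closed loop while the ego agent plans around regions of
    the given radius (hypothetical interface). Because non-ego agents
    react to the ego policy, the score distribution shifts with the
    radius -- the endogenous distribution shift. We iterate until the
    calibrated radius is consistent with the behavior it induces.
    """
    radius = 0.0
    for _ in range(iters):
        scores = rollout_fn(radius)              # scores under the current policy
        new_radius = conformal_quantile(scores, alpha)
        if abs(new_radius - radius) < tol:       # fixed point reached
            return new_radius
        radius = new_radius
    return radius
```

For a toy `rollout_fn` whose scores grow mildly with the radius (Lipschitz constant below 1), the iteration settles at a radius that covers the very behavior it provokes.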