LoRD: Adapting Differentiable Driving Policies to Distribution Shifts

📅 2024-10-13
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Autonomous vehicles suffer from generalization failure under distribution shift, and existing methods predominantly focus on open-loop motion prediction while neglecting catastrophic forgetting. This paper targets end-to-end differentiable autonomy stacks encompassing prediction, planning, and control, and studies online adaptation under closed-loop evaluation. The authors introduce a low-rank residual decoder (LoRD) for parameter-efficient adaptation and multi-task fine-tuning to jointly optimize the stack's modules. Evaluation on nuPlan and exiD closed-loop out-of-distribution (OOD) simulations reveals a substantial performance gap between open-loop and closed-loop settings for prior approaches. Compared to standard fine-tuning, the proposed method reduces forgetting by up to 23.33% and improves the closed-loop OOD driving score by 9.93%.

📝 Abstract
Distribution shifts between operational domains can severely affect the performance of learned models in self-driving vehicles (SDVs). While this is a well-established problem, prior work has mostly explored naive solutions such as fine-tuning, focusing on the motion prediction task. In this work, we explore novel adaptation strategies for differentiable autonomy stacks consisting of prediction, planning, and control, perform evaluation in closed-loop, and investigate the often-overlooked issue of catastrophic forgetting. Specifically, we introduce two simple yet effective techniques: a low-rank residual decoder (LoRD) and multi-task fine-tuning. Through experiments across three models conducted on two real-world autonomous driving datasets (nuPlan, exiD), we demonstrate the effectiveness of our methods and highlight a significant performance gap between open-loop and closed-loop evaluation in prior approaches. Our approach reduces forgetting by up to 23.33% and improves the closed-loop OOD driving score by 9.93% in comparison to standard fine-tuning.
Problem

Research questions and friction points this paper is trying to address.

Adapting self-driving models to distribution shifts
Addressing catastrophic forgetting in autonomy stacks
Improving closed-loop performance in OOD driving
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-rank residual decoder (LoRD) for adaptation
Multi-task fine-tuning to enhance performance
Closed-loop evaluation to assess driving policies
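The low-rank residual decoder listed above can be sketched as a LoRA-style adapter: the frozen base decoder's output is augmented by a trainable low-rank residual, so online adaptation updates only a small number of parameters. The sketch below is a minimal NumPy illustration under that assumption; the dimensions, the zero initialization of `B`, and the additive residual form are illustrative choices, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 64, 32, 4  # hypothetical decoder dimensions and adapter rank

W = rng.standard_normal((d_out, d_in))        # frozen pre-trained decoder weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, rank))                   # zero-init: residual starts at zero

def decode(x):
    """Base decoder output plus low-rank residual correction B @ (A @ x)."""
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
# With B = 0 the adapted decoder exactly reproduces the base model,
# so adaptation starts from the pre-trained behavior.
assert np.allclose(decode(x), W @ x)

# Only A and B are updated during adaptation: rank * (d_in + d_out)
# parameters instead of the full d_in * d_out decoder weight.
n_adapter = rank * (d_in + d_out)
n_full = d_in * d_out
print(n_adapter, n_full)  # 384 vs. 2048 parameters
```

Keeping the residual low-rank is what makes per-domain adaptation cheap: the base weights stay fixed (which also limits catastrophic forgetting), and each new operational domain only needs its own small `A`, `B` pair.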