🤖 AI Summary
Existing adversarial scenario generation methods struggle to flexibly balance adversariality and realism, resulting in limited customization capability for safety evaluation. To address this challenge in autonomous driving safety assessment, we propose SAGE, a controllable adversarial scenario generation framework. SAGE formulates scenario generation as a multi-objective preference alignment problem and introduces a novel hierarchical group-wise preference optimization strategy. Leveraging a dual-expert model architecture and grounded in linear mode connectivity theory, SAGE constructs a continuous policy spectrum via linear interpolation of model weights, enabling dynamic, retraining-free adjustment at test time. Experimental results demonstrate that SAGE significantly outperforms baseline methods in balancing adversariality and realism, while effectively enhancing the closed-loop training performance of driving policies.
📝 Abstract
Adversarial scenario generation is a cost-effective approach for safety assessment of autonomous driving systems. However, existing methods are often constrained to a single, fixed trade-off between competing objectives such as adversariality and realism. This yields behavior-specific models that cannot be steered at inference time, lacking the efficiency and flexibility to generate tailored scenarios for diverse training and testing requirements. To address this, we reframe the task of adversarial scenario generation as a multi-objective preference alignment problem and introduce a new framework named **S**teerable **A**dversarial scenario **GE**nerator (SAGE). SAGE enables fine-grained test-time control over the trade-off between adversariality and realism without any retraining. We first propose hierarchical group-based preference optimization, a data-efficient offline alignment method that learns to balance competing objectives by decoupling hard feasibility constraints from soft preferences. Instead of training a fixed model, SAGE fine-tunes two experts on opposing preferences and constructs a continuous spectrum of policies at inference time by linearly interpolating their weights. We provide theoretical justification for this framework through the lens of linear mode connectivity. Extensive experiments demonstrate that SAGE not only generates scenarios with a superior balance of adversariality and realism but also enables more effective closed-loop training of driving policies. Project page: https://tongnie.github.io/SAGE/.
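The core steering mechanism described above, blending two expert checkpoints by linear weight interpolation to obtain a continuum of intermediate policies, can be sketched as follows. This is a minimal illustration, not SAGE's actual implementation: the function name, the flat parameter dictionaries, and the toy values are all hypothetical.

```python
def interpolate_weights(theta_adv, theta_real, alpha):
    """Linearly interpolate two expert checkpoints.

    alpha = 0.0 recovers the adversariality-focused expert,
    alpha = 1.0 recovers the realism-focused expert, and
    intermediate values trace a continuous policy spectrum
    (justified by linear mode connectivity between the experts).
    Checkpoints are represented here as flat name -> value dicts.
    """
    assert theta_adv.keys() == theta_real.keys(), "experts must share architecture"
    return {k: (1.0 - alpha) * theta_adv[k] + alpha * theta_real[k]
            for k in theta_adv}


# Toy "checkpoints" standing in for the two fine-tuned experts.
adv_expert = {"w": 1.0, "b": -2.0}
real_expert = {"w": 3.0, "b": 2.0}

# A mid-spectrum policy, obtained at test time with no retraining.
mid_policy = interpolate_weights(adv_expert, real_expert, 0.5)
# mid_policy == {"w": 2.0, "b": 0.0}
```

In practice the same per-parameter blend would be applied over full model state dicts, so adjusting `alpha` at inference time shifts the generated scenarios between more adversarial and more realistic behavior.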