Driving with Regulation: Interpretable Decision-Making for Autonomous Vehicles with Retrieval-Augmented Reasoning via LLM

📅 2024-10-07
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
To address insufficient regulatory compliance, social norm alignment, and safety explainability in cross-regional autonomous driving decision-making, this paper proposes an interpretable multi-source collaborative decision framework. Methodologically, it introduces the first Traffic Regulation Retrieval (TRR) Agent and an LLM-driven semantic parsing and reasoning module to automatically distinguish mandatory from advisory provisions and jointly evaluate regulatory compliance and safety. The framework integrates Retrieval-Augmented Generation (RAG), multi-source regulatory semantic retrieval, and structured explanation techniques. Experiments demonstrate robust performance in both synthetic and real-world scenarios, enabling zero-shot cross-regional adaptation. It significantly improves decision transparency, regulatory adherence rate, and safety margin. To our knowledge, this is the first decision paradigm for L4 autonomous driving that simultaneously ensures legal traceability and social acceptability through rigorous, human-interpretable reasoning.

📝 Abstract
This work presents an interpretable decision-making framework for autonomous vehicles that integrates traffic regulations, norms, and safety guidelines comprehensively and enables seamless adaptation to different regions. While traditional rule-based methods struggle to incorporate the full scope of traffic rules, we develop a Traffic Regulation Retrieval (TRR) Agent based on Retrieval-Augmented Generation (RAG) to automatically retrieve relevant traffic rules and guidelines from extensive regulation documents and relevant records based on the ego vehicle's situation. Given the semantic complexity of the retrieved rules, we also design a reasoning module powered by a Large Language Model (LLM) to interpret these rules, differentiate between mandatory rules and safety guidelines, and assess actions on legal compliance and safety. Additionally, the reasoning is designed to be interpretable, enhancing both transparency and reliability. The framework demonstrates robust performance on both hypothesized and real-world cases across diverse scenarios, along with the ability to adapt to different regions with ease.
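The retrieval step described above — matching the ego vehicle's situation against a corpus of regulation passages — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the authors use RAG over extensive regulation documents, whereas here a simple bag-of-words cosine similarity stands in for the dense retriever, and the example regulations are invented placeholders.

```python
from collections import Counter
import math

# Hypothetical mini-corpus of (kind, provision) pairs; a real TRR agent
# would index full regional regulation documents and records.
REGULATIONS = [
    ("mandatory", "Vehicles must stop completely at a stop sign before proceeding."),
    ("mandatory", "Do not exceed the posted speed limit in school zones."),
    ("advisory", "Drivers should yield extra space to cyclists when passing."),
]

def _vectorize(text: str) -> Counter:
    # Crude term-frequency vector; stands in for a learned embedding.
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_rules(situation: str, top_k: int = 1):
    """Return the top_k (kind, provision) pairs most similar to the situation."""
    query = _vectorize(situation)
    scored = sorted(
        REGULATIONS,
        key=lambda kr: _cosine(query, _vectorize(kr[1])),
        reverse=True,
    )
    return scored[:top_k]

# Example: the ego vehicle approaches a stop sign.
kind, rule = retrieve_rules("ego vehicle approaching a stop sign at an intersection")[0]
print(kind, "-", rule)
```

In the full framework, the retrieved passages would then be handed to the LLM reasoning module rather than printed; swapping the toy vectorizer for dense embeddings changes nothing structural in this loop.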
Problem

Research questions and friction points this paper is trying to address.

Develops interpretable decision-making for autonomous vehicles using traffic regulations.
Integrates retrieval-augmented reasoning to adapt to diverse regional traffic rules.
Enhances legal compliance and safety through LLM-powered rule interpretation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Traffic Regulation Retrieval Agent for rule retrieval
Large Language Model for semantic rule interpretation
Interpretable reasoning for legal and safety compliance
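The second contribution above — differentiating mandatory rules from advisory guidelines before assessing an action — can be illustrated with a small sketch. In the paper this interpretation is performed by an LLM; the keyword heuristic below is a stand-in for illustration only, and the cue lists are assumptions, not the authors' method.

```python
# Hypothetical cue words for classifying a retrieved provision.
MANDATORY_CUES = ("must", "shall", "do not", "prohibited", "required")
ADVISORY_CUES = ("should", "recommended", "advised", "may")

def classify_provision(rule_text: str) -> str:
    """Label a provision 'mandatory' or 'advisory' ('unknown' otherwise).

    Mandatory cues are checked first, since binding language takes
    precedence when both kinds of phrasing appear in one provision.
    """
    text = rule_text.lower()
    if any(cue in text for cue in MANDATORY_CUES):
        return "mandatory"
    if any(cue in text for cue in ADVISORY_CUES):
        return "advisory"
    return "unknown"

print(classify_provision("Vehicles must stop at a red signal."))
print(classify_provision("Drivers should keep a safe distance."))
```

An LLM-based classifier generalizes far beyond fixed cue words (e.g. handling region-specific legal phrasing), which is precisely why the framework delegates this step to the reasoning module.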
Tianhui Cai
UCLA Mobility Lab and Mobility Center of Excellence, Los Angeles, Los Angeles, USA
Yifan Liu
UCLA Mobility Lab and Mobility Center of Excellence, Los Angeles, Los Angeles, USA
Zewei Zhou
University of California, Los Angeles
Deep Learning, Computer Vision, Autonomous Driving, Robotics
Haoxuan Ma
University of California, Los Angeles
Intelligent Transportation Systems, Machine Learning, Automated Vehicles
Seth Z. Zhao
University of California, Los Angeles
Simulation, Multi-agent Learning, Autonomous Driving, Cooperative Driving
Zhiwen Wu
UCLA Mobility Lab and Mobility Center of Excellence, Los Angeles, Los Angeles, USA
Jiaqi Ma
UCLA Mobility Lab and Mobility Center of Excellence, Los Angeles, Los Angeles, USA