RDAR: Reward-Driven Agent Relevance Estimation for Autonomous Driving

📅 2025-09-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Autonomous driving systems suffer from high computational overhead: while existing attention mechanisms implicitly filter interacting agents, their quadratic complexity O(n²) hinders real-time inference in dense traffic scenarios. This paper proposes a reinforcement learning–based dynamic agent selection method that formulates critical agent identification as a Markov decision process. A reward-driven policy learns the relevance of each dynamic agent (e.g., vehicles, pedestrians) to ego-vehicle behavior and generates binary saliency masks. Integrated with a pre-trained behavioral model, the approach enables efficient attention pruning while preserving safety, traffic throughput, and overall progress. Experiments on large-scale driving datasets demonstrate substantial input dimensionality reduction—up to an order of magnitude—without degrading decision-making performance. The method establishes a new paradigm for lightweight, interpretable autonomous driving decision-making.
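The core mechanism described above, a binary saliency mask that selects which agents a pre-trained behavior model gets to see, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear scorer, threshold, and feature dimensions are all placeholder assumptions (in RDAR the mask is the action of a learned RL policy).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: n agents, each with a feature vector
# (e.g. position, velocity, heading); dimensions are made up.
n_agents, feat_dim = 32, 6
agent_feats = rng.normal(size=(n_agents, feat_dim))

# Stand-in relevance scorer: a fixed linear map. RDAR instead learns
# a reward-driven policy that outputs the mask directly.
w = rng.normal(size=feat_dim)
scores = agent_feats @ w

# The MDP action is a binary saliency mask over agents.
mask = scores > 0.0

# The pre-trained behavior model then processes only the selected
# agents, shrinking the input that any attention layer must handle.
pruned_feats = agent_feats[mask]
print(pruned_feats.shape[0], "of", n_agents, "agents kept")
```

Because attention over agent interactions scales quadratically, halving the number of selected agents roughly quarters that cost.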

📝 Abstract
Human drivers focus on only a handful of agents at any one time. Autonomous driving systems, in contrast, process complex scenes with numerous agents, regardless of whether they are pedestrians on a crosswalk or vehicles parked on the side of the road. While attention mechanisms offer an implicit way to reduce the input to the elements that affect decisions, existing attention mechanisms for capturing agent interactions are quadratic and generally computationally expensive. We propose RDAR, a strategy to learn per-agent relevance -- how much each agent influences the behavior of the controlled vehicle -- by identifying which agents can be excluded from the input to a pre-trained behavior model. We formulate the masking procedure as a Markov Decision Process in which the action is a binary mask indicating agent selection. We evaluate RDAR on a large-scale driving dataset and demonstrate that it learns an accurate numerical measure of relevance, achieving comparable results in terms of overall progress, safety, and driving performance while processing significantly fewer agents than a state-of-the-art behavior model.
Problem

Research questions and friction points this paper is trying to address.

Reducing computational complexity in autonomous driving scene processing
Identifying which agents influence the controlled vehicle's behavior
Maintaining driving performance while processing significantly fewer agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learns per-agent relevance using reward-driven estimation
Formulates agent masking as Markov Decision Process
Achieves comparable performance with fewer processed agents
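The reward-driven estimation named above has to balance two competing objectives: keep driving quality high while keeping few agents. A hedged sketch of such a trade-off reward is below; the specific terms, weights, and function name are illustrative assumptions, not taken from the paper.

```python
def rdar_style_reward(progress, collision, n_kept, n_total,
                      sparsity_weight=0.1):
    """Illustrative reward trading driving quality against mask size.

    The actual RDAR reward terms and weights are assumptions here:
    we reward ego progress, heavily penalize collisions (safety), and
    add a small bonus for masking out more agents.
    """
    driving_term = progress - (10.0 if collision else 0.0)
    sparsity_bonus = sparsity_weight * (1.0 - n_kept / n_total)
    return driving_term + sparsity_bonus

# With equal driving quality, keeping fewer agents earns more reward,
# which pushes the policy toward minimal sufficient agent sets.
r_sparse = rdar_style_reward(progress=1.0, collision=False, n_kept=3, n_total=32)
r_dense = rdar_style_reward(progress=1.0, collision=False, n_kept=30, n_total=32)
```

A safety-dominant collision penalty like this keeps sparsity from ever being worth ignoring a relevant agent.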
🔎 Similar Papers
2024-04-12 · 2024 IEEE Intelligent Vehicles Symposium (IV) · Citations: 8
👥 Authors
Carlo Bosio (UC Berkeley)
Greg Woelki (Zoox Inc.)
Noureldin Hendy (Zoox Inc.)
Nicholas Roy (MIT) · Robotics, Machine Learning, Human-Robot Interaction, Micro Air Vehicles
Byungsoo Kim (Zoox Inc.)