🤖 AI Summary
This work addresses the low sample efficiency and weak semantic interpretability of reinforcement learning for autonomous driving, compounded by the high inference latency of vision-language models (VLMs) despite their rich semantic representations. To overcome these limitations, the authors propose an asynchronous batched inference framework that decouples VLMs from the reinforcement learning loop, and introduce two key mechanisms, Value-Margin Regularization (VMR) and Advantage-Weighted Action Guidance (AWAG), to efficiently distill VLM knowledge. Additionally, a conditional contrastive action alignment strategy mitigates CLIP's representational blind spots in dynamic driving scenarios. The resulting lightweight policy runs at roughly 500 FPS, near real time, while matching the performance of billion-parameter VLMs, substantially improving both sample efficiency and semantic understanding.
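The decoupling described above can be illustrated with a minimal sketch: the RL loop submits observations without blocking, and a background worker gathers them into batches for a single slow model call. All names here (`fake_vlm_batch`, `AsyncBatchedInference`, the batch size) are hypothetical stand-ins, not the platform's actual API:

```python
import queue
import threading

def fake_vlm_batch(observations):
    """Stand-in for a heavy VLM call; returns one suggestion per observation.
    (Hypothetical: the real platform would query a fine-tuned VLM here.)"""
    return [f"suggestion-for-{obs}" for obs in observations]

class AsyncBatchedInference:
    """Collects observations from the RL loop into batches and runs the slow
    model in a background thread, so environment stepping never blocks."""

    def __init__(self, batch_size=4):
        self.batch_size = batch_size
        self.requests = queue.Queue()
        self.results = {}
        self.lock = threading.Lock()
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def submit(self, step_id, observation):
        # Non-blocking: the RL loop enqueues a request and moves on.
        self.requests.put((step_id, observation))

    def poll(self, step_id):
        # The RL loop consumes suggestions whenever they become available,
        # typically a few steps late; the supervision signals tolerate this lag.
        with self.lock:
            return self.results.pop(step_id, None)

    def _run(self):
        while True:
            batch = [self.requests.get()]  # block for at least one request
            while len(batch) < self.batch_size:
                try:
                    batch.append(self.requests.get_nowait())
                except queue.Empty:
                    break
            ids, obs = zip(*batch)
            outputs = fake_vlm_batch(list(obs))  # one slow call per batch
            with self.lock:
                self.results.update(zip(ids, outputs))
```

Because the worker thread owns the model call, simulation throughput is bounded by the environment rather than by VLM latency, which is the property the framework relies on.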
📝 Abstract
Reinforcement Learning (RL) has emerged as a dominant paradigm for end-to-end autonomous driving (AD). However, RL suffers from sample inefficiency and a lack of semantic interpretability in complex scenarios. Foundation Models, particularly Vision-Language Models (VLMs), can mitigate these issues by offering rich, context-aware knowledge, yet their high inference latency hinders deployment in high-frequency RL training loops. To bridge this gap, we present Found-RL, a platform tailored to efficiently enhance RL for AD with foundation models. A core innovation is an asynchronous batched inference framework that decouples heavy VLM reasoning from the simulation loop, resolving the latency bottleneck to support real-time learning. We introduce two supervision mechanisms, Value-Margin Regularization (VMR) and Advantage-Weighted Action Guidance (AWAG), to distill expert-like VLM action suggestions into the RL policy. Additionally, we adopt high-throughput CLIP for dense reward shaping and address CLIP's blindness to dynamics via Conditional Contrastive Action Alignment, which conditions prompts on the discretized speed and navigation command and yields a normalized, margin-based bonus from context-specific action-anchor scoring. Found-RL provides an end-to-end pipeline for integrating fine-tuned VLMs and shows that a lightweight RL policy can approach the performance of billion-parameter VLMs while sustaining real-time inference (approx. 500 FPS). Code, data, and models will be publicly available at https://github.com/ys-qu/found-rl.
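The margin-based bonus from action-anchor scoring can be sketched as follows, assuming precomputed CLIP embeddings. The example prompts, temperature value, and the mapping to [0, 1] are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def margin_bonus(image_emb, anchor_embs, action_idx, temperature=0.07):
    """Context-specific action-anchor scoring (a sketch, not Found-RL's exact form).

    anchor_embs holds one CLIP text embedding per candidate action, where each
    prompt was built conditioned on the discretized speed bin and navigation
    command, e.g. "ego at low speed, command turn-left, action: accelerate".
    Returns a bonus in [0, 1]: high when the taken action's anchor matches the
    current image better than the best competing anchor.
    """
    # Cosine similarity = dot product of L2-normalized embeddings.
    img = image_emb / np.linalg.norm(image_emb)
    txt = anchor_embs / np.linalg.norm(anchor_embs, axis=1, keepdims=True)
    sims = txt @ img
    probs = np.exp(sims / temperature)
    probs /= probs.sum()                       # softmax over action anchors
    others = np.delete(probs, action_idx)
    margin = probs[action_idx] - others.max()  # margin over best competitor
    return 0.5 * (margin + 1.0)                # map [-1, 1] to [0, 1]
```

Scoring against per-context anchors rather than a single generic caption is what lets a static image-text model like CLIP discriminate between actions that only differ given the driving context.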