Do LLM Modules Generalize? A Study on Motion Generation for Autonomous Driving

📅 2025-09-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically investigates the transferability of large language model (LLM) components to autonomous driving trajectory generation. Addressing the core question—*which LLM modules can be effectively adapted for motion generation?*—we conduct the first comprehensive empirical evaluation across five key modules: tokenizer design, positional embeddings, pretraining paradigms, post-training strategies, and test-time computation, benchmarked on Waymo Sim Agents. We propose a lightweight adaptation framework grounded in autoregressive sequence modeling and context-aware decision-making, and uncover intrinsic mechanisms governing module transferability—e.g., local geometric sensitivity constraining positional encoding efficacy. Experiments demonstrate that carefully tailored LLM modules significantly improve trajectory prediction performance, achieving competitive results relative to the state of the art on Sim Agents. Our work establishes a reproducible methodology and theoretical foundation for cross-domain transfer of language models in multimodal embodied intelligence.
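The tokenizer-design module discussed above can be made concrete with a minimal sketch of delta-based trajectory tokenization: per-step (dx, dy) displacements are clipped, uniformly binned, and fused into single token ids suitable for autoregressive sequence modeling. The bin count, clipping range, and function names here are illustrative assumptions, not the paper's actual tokenizer.

```python
def _bin(d, n_bins, max_delta):
    # Clip extreme motion, then map [-max_delta, max_delta] to {0, ..., n_bins - 1}.
    d = max(-max_delta, min(max_delta, d))
    return round((d + max_delta) / (2 * max_delta) * (n_bins - 1))

def tokenize_trajectory(xy, n_bins=128, max_delta=5.0):
    """Discretize per-step (dx, dy) displacements into single token ids.
    Illustrative sketch only; the paper's actual tokenizer may differ."""
    tokens = []
    for (x0, y0), (x1, y1) in zip(xy, xy[1:]):
        bx = _bin(x1 - x0, n_bins, max_delta)
        by = _bin(y1 - y0, n_bins, max_delta)
        tokens.append(bx * n_bins + by)  # fuse x/y bins into one vocabulary entry
    return tokens

def detokenize(tokens, start, n_bins=128, max_delta=5.0):
    """Invert tokenize_trajectory by cumulatively applying decoded displacements."""
    x, y = start
    traj = [(x, y)]
    for t in tokens:
        bx, by = divmod(t, n_bins)
        x += bx / (n_bins - 1) * (2 * max_delta) - max_delta
        y += by / (n_bins - 1) * (2 * max_delta) - max_delta
        traj.append((x, y))
    return traj
```

Round-tripping a trajectory through this scheme incurs only quantization error bounded by half a bin width per step, which is the usual trade-off between vocabulary size and geometric fidelity in token-based motion representations.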

📝 Abstract
Recent breakthroughs in large language models (LLMs) have not only advanced natural language processing but also inspired their application in domains with structurally similar problems--most notably, autonomous driving motion generation. Both domains involve autoregressive sequence modeling, token-based representations, and context-aware decision making, making the transfer of LLM components a natural and increasingly common practice. However, despite promising early attempts, a systematic understanding of which LLM modules are truly transferable remains lacking. In this paper, we present a comprehensive evaluation of five key LLM modules--tokenizer design, positional embedding, pre-training paradigms, post-training strategies, and test-time computation--within the context of motion generation for autonomous driving. Through extensive experiments on the Waymo Sim Agents benchmark, we demonstrate that, when appropriately adapted, these modules can significantly improve performance for autonomous driving motion generation. In addition, we identify which techniques can be effectively transferred, analyze the potential reasons for the failure of others, and discuss the specific adaptations needed for autonomous driving scenarios. We evaluate our method on the Sim Agents task and achieve competitive results.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM module transferability for autonomous driving motion generation
Identifying effective techniques and analyzing failure reasons in transfer
Assessing specific adaptations needed for autonomous driving scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapted LLM tokenizer for motion representation
Transferred positional embedding to driving context
Applied post-training strategies for autonomous scenarios
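Of the transferred modules, test-time computation can be sketched as best-of-N rollout sampling: draw several autoregressive trajectories from a next-token model and keep the highest-scoring one. The model interface (`step_logits_fn`) and the scoring function below are hypothetical stand-ins for illustration, not the paper's implementation.

```python
import math
import random

def sample_rollout(step_logits_fn, context, horizon, temperature=1.0, rng=random):
    """Autoregressively sample one token rollout from a next-token model.
    step_logits_fn(tokens) -> list of unnormalized logits over the vocabulary."""
    tokens = list(context)
    for _ in range(horizon):
        scaled = [l / temperature for l in step_logits_fn(tokens)]
        m = max(scaled)  # subtract max for numerical stability
        probs = [math.exp(l - m) for l in scaled]
        r, acc = rng.random() * sum(probs), 0.0
        for tok, p in enumerate(probs):
            acc += p
            if r <= acc:
                tokens.append(tok)
                break
    return tokens[len(context):]

def best_of_n(step_logits_fn, context, horizon, score_fn, n=8):
    """Test-time computation: draw n rollouts, keep the highest-scoring one."""
    rollouts = [sample_rollout(step_logits_fn, context, horizon) for _ in range(n)]
    return max(rollouts, key=score_fn)
```

In a driving setting, `score_fn` would typically encode realism or rule compliance (e.g., penalizing off-road or colliding rollouts), making extra sampling compute at inference time directly exchangeable for trajectory quality.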
Mingyi Wang
Autolab, Westlake University
Jingke Wang
UDEER.AI
Tengju Ye
UDEER.AI
Junbo Chen
UDEER.AI
Kaicheng Yu
Assistant Professor, Westlake University, PI of Autonomous Intelligence Lab
computer vision, 3D understanding, autonomous perception, automatic machine learning