ColaVLA: Leveraging Cognitive Latent Reasoning for Hierarchical Parallel Trajectory Planning in Autonomous Driving

📅 2025-12-28
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
To address key bottlenecks in autonomous driving, namely text-action misalignment, high autoregressive latency, and weak causal reasoning under multimodal inputs, this paper proposes a unified Vision-Language-Action (VLA) framework for trajectory generation. The method introduces three core innovations: (1) a Cognitive Latent Reasoner that compresses textual reasoning into decision-oriented meta-action embeddings; (2) a non-autoregressive, hierarchical parallel decoder that generates multi-scale trajectories in a single forward pass, balancing generalizability and real-time performance; and (3) ego-adaptive scene understanding integrated with causal constraint optimization to ensure trajectory safety and causal consistency. Evaluated on nuScenes, the approach achieves state-of-the-art (SOTA) performance in both open-loop and closed-loop settings, reduces inference latency by 76%, significantly improves robustness, and enables onboard real-time deployment.
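
The latent-reasoning step is concrete enough to sketch. Below is a minimal PyTorch sketch of how a latent reasoner might compress VLM hidden states into a handful of ego-adaptive meta-action embeddings; the class name, tensor shapes, and attention-pooling scheme are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class CognitiveLatentReasoner(nn.Module):
    """Hypothetical sketch: compress VLM hidden states into K decision-oriented
    meta-action embeddings via ego-adaptive cross-attention. All names, sizes,
    and the pooling scheme are assumptions, not the paper's implementation."""

    def __init__(self, d_model=1024, num_meta_actions=8, ego_dim=16):
        super().__init__()
        # One learnable query per meta-action slot.
        self.meta_queries = nn.Parameter(torch.randn(num_meta_actions, d_model))
        # Ego state (speed, heading, ...) conditions the queries so the
        # selected scene evidence adapts to the current driving context.
        self.ego_proj = nn.Linear(ego_dim, d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.out = nn.LayerNorm(d_model)

    def forward(self, vlm_hidden, ego_state):
        # vlm_hidden: (B, T, d_model) hidden states from a VLM forward pass
        # ego_state:  (B, ego_dim)
        B = vlm_hidden.size(0)
        q = self.meta_queries.unsqueeze(0).expand(B, -1, -1)
        q = q + self.ego_proj(ego_state).unsqueeze(1)   # ego-adaptive queries
        meta, _ = self.cross_attn(q, vlm_hidden, vlm_hidden)
        return self.out(meta)                           # (B, K, d_model)
```

In this reading, only the pooled meta-action tensor, not a decoded chain-of-thought text, is handed downstream, which is what would keep the subsequent trajectory decoding cheap.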

📝 Abstract
Autonomous driving requires generating safe and reliable trajectories from complex multimodal inputs. Traditional modular pipelines separate perception, prediction, and planning, while recent end-to-end (E2E) systems learn them jointly. Vision-language models (VLMs) further enrich this paradigm by introducing cross-modal priors and commonsense reasoning, yet current VLM-based planners face three key challenges: (i) a mismatch between discrete text reasoning and continuous control, (ii) high latency from autoregressive chain-of-thought decoding, and (iii) inefficient or non-causal planners that limit real-time deployment. We propose ColaVLA, a unified vision-language-action framework that transfers reasoning from text to a unified latent space and couples it with a hierarchical, parallel trajectory decoder. The Cognitive Latent Reasoner compresses scene understanding into compact, decision-oriented meta-action embeddings through ego-adaptive selection and only two VLM forward passes. The Hierarchical Parallel Planner then generates multi-scale, causality-consistent trajectories in a single forward pass. Together, these components preserve the generalization and interpretability of VLMs while enabling efficient, accurate, and safe trajectory generation. Experiments on the nuScenes benchmark show that ColaVLA achieves state-of-the-art performance in both open-loop and closed-loop settings with favorable efficiency and robustness.
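
The abstract's hierarchical, single-pass decoding can likewise be sketched. Assuming the planner conditions on the meta-action embeddings and decodes all waypoints in parallel, a coarse-to-fine head might look like the following; the horizons, number of scales, and layer choices are assumptions rather than the published design.

```python
import torch
import torch.nn as nn

class HierarchicalParallelPlanner(nn.Module):
    """Hypothetical sketch: decode a coarse trajectory and its fine refinement
    in one non-autoregressive forward pass, conditioned on meta-action
    embeddings. Horizons, scales, and head designs are assumptions."""

    def __init__(self, d_model=1024, coarse_steps=6, fine_per_step=5):
        super().__init__()
        # One learnable query per coarse trajectory segment.
        self.coarse_queries = nn.Parameter(torch.randn(coarse_steps, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.coarse_head = nn.Linear(d_model, 2)                # (x, y) anchor per segment
        self.fine_head = nn.Linear(d_model, fine_per_step * 2)  # offsets within a segment
        self.fine_per_step = fine_per_step

    def forward(self, meta_actions):
        # meta_actions: (B, K, d_model) from the latent reasoner
        B = meta_actions.size(0)
        q = self.coarse_queries.unsqueeze(0).expand(B, -1, -1)
        h = self.decoder(q, meta_actions)        # all segments decoded in parallel
        coarse = self.coarse_head(h)             # (B, S, 2) coarse waypoints
        offsets = self.fine_head(h).view(B, -1, self.fine_per_step, 2)
        fine = coarse.unsqueeze(2) + offsets     # refine each segment around its anchor
        return coarse, fine.flatten(1, 2)        # (B, S, 2), (B, S*F, 2)
```

Because every waypoint query attends to the meta-actions in a single decoder call, latency does not grow with the trajectory horizon, which is the property the paper attributes to its non-autoregressive design.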
Problem

Research questions and friction points this paper is trying to address.

Mismatch between discrete text reasoning and continuous control tasks
High latency from autoregressive chain-of-thought decoding
Inefficient or non-causal planners that block real-time trajectory planning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cognitive latent reasoning transfers text reasoning to latent space
Hierarchical parallel planner generates multi-scale trajectories in one pass
Ego-adaptive selection compresses scene understanding into meta-action embeddings
Qihang Peng
Tsinghua University
Xuesong Chen
CUHK MMLab
Chenye Yang
Tsinghua University
Shaoshuai Shi
Didi Chuxing, Max Planck Institute for Informatics
Hongsheng Li
CUHK MMLab

Computer Vision · Deep Learning · Autonomous Driving