LangDriveCTRL: Natural Language Controllable Driving Scene Editing with Multi-modal Agents

📅 2025-12-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of controllable video synthesis for autonomous driving scenarios. Methodologically, it introduces the first natural-language-driven fine-grained editing framework for driving videos, built upon a 3D scene graph that disentangles static backgrounds from dynamic objects. A multi-agent collaborative editing architecture, comprising Orchestrator, Grounding, Behavior Editing, and Reviewer agents, jointly controls object instantiation and multi-vehicle behavioral semantics from a single text instruction. Crucially, a behavior-review iterative mechanism ensures traffic-plausible motion generation. Contributions include: (i) the first driving-specific multimodal agent-based editing paradigm; (ii) tight integration of semantic understanding, behavior planning, and diffusion-based rendering; and (iii) state-of-the-art performance, achieving nearly twice the instruction alignment accuracy of prior methods while significantly improving structural fidelity, photorealism, and traffic-semantic plausibility.

📝 Abstract
LangDriveCTRL is a natural-language-controllable framework for editing real-world driving videos to synthesize diverse traffic scenarios. It leverages explicit 3D scene decomposition to represent driving videos as a scene graph containing a static background and dynamic objects. To enable fine-grained editing and realism, it incorporates an agentic pipeline in which an Orchestrator transforms user instructions into execution graphs that coordinate specialized agents and tools. Specifically, an Object Grounding Agent establishes correspondence between free-form text descriptions and target object nodes in the scene graph; a Behavior Editing Agent generates multi-object trajectories from language instructions; and a Behavior Reviewer Agent iteratively reviews and refines the generated trajectories. The edited scene graph is rendered and then refined using a video diffusion tool to address artifacts introduced by object insertion and significant view changes. LangDriveCTRL supports both object node editing (removal, insertion, and replacement) and multi-object behavior editing from a single natural-language instruction. Quantitatively, it achieves nearly $2\times$ higher instruction alignment than the previous SoTA, with superior structural preservation, photorealism, and traffic realism. Project page is available at: https://yunhe24.github.io/langdrivectrl/.
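The abstract's agentic pipeline (Orchestrator coordinating grounding, behavior editing, and iterative review over a scene graph) can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's actual API: every class, function, and trajectory format here is an assumption, and the agents are stand-in stubs for the paper's LLM-driven components.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectNode:
    name: str
    trajectory: list  # hypothetical per-frame states, e.g. (t, x, y)

@dataclass
class SceneGraph:
    background: str                               # static background node
    objects: dict = field(default_factory=dict)   # name -> ObjectNode

def ground_objects(scene: SceneGraph, instruction: str) -> list:
    """Object Grounding Agent (stub): match text to object nodes by name."""
    return [name for name in scene.objects if name in instruction]

def edit_behavior(scene: SceneGraph, targets: list, instruction: str) -> dict:
    """Behavior Editing Agent (stub): propose trajectories for the targets."""
    return {t: [(i, 0.0, 0.0) for i in range(5)] for t in targets}

def review(trajectories: dict) -> bool:
    """Behavior Reviewer Agent (stub): accept only non-empty trajectories."""
    return all(len(traj) > 0 for traj in trajectories.values())

def orchestrate(scene: SceneGraph, instruction: str, max_iters: int = 3):
    """Orchestrator: ground targets, then iterate edit -> review until
    the reviewer accepts (the paper's behavior-review iteration)."""
    targets = ground_objects(scene, instruction)
    for _ in range(max_iters):
        trajectories = edit_behavior(scene, targets, instruction)
        if review(trajectories):
            for name, traj in trajectories.items():
                scene.objects[name].trajectory = traj
            return scene  # would then be rendered + diffusion-refined
    raise RuntimeError("reviewer rejected all proposed edits")

scene = SceneGraph(background="street", objects={
    "white sedan": ObjectNode("white sedan", []),
})
edited = orchestrate(scene, "make the white sedan change lanes")
print(len(edited.objects["white sedan"].trajectory))  # 5
```

In the actual system the stubs are replaced by multimodal agents, and the accepted scene graph is rendered and passed to a video diffusion tool for artifact refinement.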
Problem

Research questions and friction points this paper is trying to address.

Existing methods lack fine-grained natural-language control for editing real-world driving videos.
Prior approaches cannot jointly handle object edits and multi-object behavior edits from a single language instruction.
Synthesized scenarios often suffer from weak instruction alignment, poor structural preservation, and limited photorealism.
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D scene decomposition for driving video representation
Agentic pipeline with specialized agents for fine-grained editing
Video diffusion tool for rendering and artifact refinement
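The object node edits the framework supports (removal, insertion, and replacement, per the abstract) reduce to simple operations on the scene graph's object map. A minimal sketch under that assumption; the function names and dict-based representation are illustrative, not the paper's implementation:

```python
def remove_node(objects: dict, name: str) -> dict:
    """Object removal: drop a dynamic-object node from the scene graph."""
    return {k: v for k, v in objects.items() if k != name}

def insert_node(objects: dict, name: str, asset) -> dict:
    """Object insertion: add a new node (e.g. a 3D vehicle asset)."""
    return {**objects, name: asset}

def replace_node(objects: dict, name: str, asset) -> dict:
    """Object replacement: removal followed by insertion under the same key."""
    return insert_node(remove_node(objects, name), name, asset)

objects = {"sedan": "sedan_asset", "truck": "truck_asset"}
objects = remove_node(objects, "truck")
objects = insert_node(objects, "bus", "bus_asset")
print(sorted(objects))  # ['bus', 'sedan']
```

In the full pipeline these edits are applied to 3D scene-graph nodes, after which the scene is re-rendered and refined by the video diffusion tool.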