Plan-X: Instruct Video Generation via Semantic Planning

📅 2025-11-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion Transformers often suffer from visual hallucinations and poor instruction alignment in complex video generation, particularly for high-level semantic tasks involving human-object interaction, multi-stage actions, and contextual motion reasoning. To address these limitations, we propose Plan-X, the first framework to introduce a learnable multimodal semantic planning mechanism. Plan-X employs a multimodal large language model as a semantic planner that jointly reasons over textual and visual context to infer user intent and autoregressively generate spatiotemporal semantic tokens. These tokens serve as structured priors that explicitly guide the diffusion Transformer toward high-fidelity video synthesis. Experiments demonstrate that Plan-X significantly suppresses visual hallucinations, improves instruction adherence and spatiotemporal consistency, and achieves state-of-the-art performance on complex scene understanding and long-horizon action modeling tasks.

📝 Abstract
Diffusion Transformers have demonstrated remarkable capabilities in visual synthesis, yet they often struggle with high-level semantic reasoning and long-horizon planning. This limitation frequently leads to visual hallucinations and misalignments with user instructions, especially in scenarios involving complex scene understanding, human-object interactions, multi-stage actions, and in-context motion reasoning. To address these challenges, we propose Plan-X, a framework that explicitly enforces high-level semantic planning to instruct the video generation process. At its core lies a Semantic Planner, a learnable multimodal language model that reasons over the user's intent from both text prompts and visual context, and autoregressively generates a sequence of text-grounded spatio-temporal semantic tokens. These semantic tokens, complementary to high-level text prompt guidance, serve as structured "semantic sketches" over time for the video diffusion model, which excels at synthesizing high-fidelity visual details. Plan-X thus combines the strength of language models in multimodal in-context reasoning and planning with the strength of diffusion models in photorealistic video synthesis. Extensive experiments demonstrate that our framework substantially reduces visual hallucinations and enables fine-grained, instruction-aligned video generation consistent with multimodal context.
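The abstract describes a two-stage pipeline: a semantic planner first autoregressively emits spatio-temporal semantic tokens, and a video diffusion model then synthesizes frames conditioned on both the text prompt and those tokens. A minimal sketch of that control flow, with toy stand-in modules (all class and function names here are illustrative assumptions, not the paper's actual API):

```python
# Hypothetical sketch of the Plan-X two-stage pipeline: a semantic planner
# emits one semantic-token sequence per frame, which then conditions a
# video diffusion model. Names and internals are illustrative stand-ins.
from dataclasses import dataclass
from typing import List


@dataclass
class PlanXOutput:
    semantic_tokens: List[List[int]]  # one token sequence per frame
    video: List[str]                  # placeholder for decoded frames


class SemanticPlanner:
    """Stand-in for the multimodal LLM that reasons over the prompt (and,
    in the paper, visual context) and autoregressively emits semantic
    tokens; here a deterministic toy generator keeps the sketch runnable."""

    def __init__(self, vocab_size: int = 1024, tokens_per_frame: int = 4):
        self.vocab_size = vocab_size
        self.tokens_per_frame = tokens_per_frame

    def plan(self, prompt: str, num_frames: int) -> List[List[int]]:
        tokens, state = [], 0
        for f in range(num_frames):
            frame_tokens = []
            for t in range(self.tokens_per_frame):
                # Toy "autoregression": each token depends on the prompt,
                # position, and the running state from previous tokens.
                state = (state * 31 + hash((prompt, f, t))) % self.vocab_size
                frame_tokens.append(state)
            tokens.append(frame_tokens)
        return tokens


class VideoDiffusionModel:
    """Stand-in for the diffusion Transformer; in Plan-X it would denoise
    video latents conditioned on the text prompt plus the planner's
    semantic tokens (the "semantic sketches")."""

    def generate(self, prompt: str, semantic_tokens: List[List[int]]) -> List[str]:
        return [f"frame_{i}:cond={sum(toks)}" for i, toks in enumerate(semantic_tokens)]


def plan_x_generate(prompt: str, num_frames: int = 8) -> PlanXOutput:
    planner = SemanticPlanner()
    tokens = planner.plan(prompt, num_frames)          # stage 1: plan
    video = VideoDiffusionModel().generate(prompt, tokens)  # stage 2: synthesize
    return PlanXOutput(semantic_tokens=tokens, video=video)
```

The key design point this mirrors is the interface: the planner's output is a discrete token sequence per frame, so the diffusion model receives an explicit, time-indexed semantic prior rather than only a global text embedding.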
Problem

Research questions and friction points this paper is trying to address.

Address visual hallucinations in video generation models
Improve alignment with complex user instructions
Enhance semantic reasoning for multi-stage actions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic Planner generates text-grounded spatio-temporal tokens
Framework integrates language models with diffusion transformers
Structured semantic sketches guide video synthesis process