🤖 AI Summary
Current text-to-video (T2V) diffusion models rely on fixed, pre-trained text encoders, which struggle with short, unstructured prompts, leading to insufficient semantic understanding and misalignment with user intent. To address this, we propose RISE-T2V, a unified framework that integrates prompt rephrasing and semantic extraction. It introduces a lightweight rephrasing adapter that implicitly leverages hidden states from large language models (LLMs), specifically those produced during next-token prediction, as dynamic conditioning signals for video generation, enabling context-aware prompt expansion and semantic enrichment. The framework further incorporates hidden-state injection, cross-modal feature alignment, and end-to-end optimization with the video diffusion model. Experiments demonstrate that RISE-T2V significantly improves both visual quality and intent fidelity across diverse T2V architectures, exhibiting a model-agnostic design and strong generalization.
📝 Abstract
Most text-to-video (T2V) diffusion models depend on pre-trained text encoders for semantic alignment, yet they often fail to maintain video quality when given concise prompts rather than carefully crafted ones. The primary issue lies in their limited understanding of textual semantics. Moreover, these text encoders cannot rephrase prompts online to better align with user intent, which limits both the scalability and usability of the models. To address these challenges, we introduce RISE-T2V, which uniquely integrates prompt rephrasing and semantic feature extraction into a single, seamless step instead of two separate ones. RISE-T2V is universal and can be applied to various pre-trained LLMs and video diffusion models (VDMs), significantly enhancing their capabilities for T2V tasks. We propose a novel module, the Rephrasing Adapter, which enables diffusion models to use the text hidden states produced during the LLM's next-token prediction as the condition for video generation. With the Rephrasing Adapter, the video generation model can implicitly rephrase basic prompts into more comprehensive representations that better match the user's intent. Furthermore, we leverage the powerful capabilities of LLMs to enable video generation models to accomplish a broader range of T2V tasks. Extensive experiments demonstrate that RISE-T2V is a versatile framework applicable to different video diffusion model architectures, significantly enhancing the ability of T2V models to generate high-quality videos that align with user intent. Visual results are available at https://rise-t2v.github.io.
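The conditioning pathway described above can be sketched as follows. This is a minimal, illustrative Python sketch of the idea, not the authors' implementation: the LLM is stubbed out with random vectors, and all names and dimensions (`RephrasingAdapter`, `HIDDEN`, `COND`) are hypothetical. The point is the data flow: one hidden-state vector per next-token-prediction step, projected by a lightweight adapter into the conditioning space consumed by the video diffusion model.

```python
import random

HIDDEN = 8  # hypothetical LLM hidden-state size
COND = 4    # hypothetical conditioning dimension of the video diffusion model

def llm_hidden_states(prompt_tokens):
    """Stand-in for an LLM: emit one hidden-state vector per
    next-token-prediction step. A real LLM would produce these
    while implicitly rephrasing/expanding the basic prompt."""
    random.seed(0)
    return [[random.random() for _ in range(HIDDEN)] for _ in prompt_tokens]

class RephrasingAdapter:
    """Lightweight linear projection mapping LLM hidden states into
    the conditioning space of the video diffusion model."""
    def __init__(self, d_in, d_out):
        random.seed(1)
        self.w = [[random.random() for _ in range(d_in)] for _ in range(d_out)]

    def __call__(self, states):
        # Project each per-token hidden state independently.
        return [[sum(wi * hi for wi, hi in zip(row, h)) for row in self.w]
                for h in states]

tokens = ["a", "cat", "on", "a", "skateboard"]  # toy basic prompt
states = llm_hidden_states(tokens)              # one vector per token step
adapter = RephrasingAdapter(HIDDEN, COND)
condition = adapter(states)  # conditioning sequence passed to the VDM
print(len(condition), len(condition[0]))
```

In a real system the adapter would be trained end-to-end with the diffusion model, so that the projected hidden states act as a drop-in replacement for the usual text-encoder embeddings in the model's cross-attention layers.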