🤖 AI Summary
Existing video generation tools struggle to support the multi-stage, multimodal, decision-intensive workflows that translate complex ideation into video content, largely because they cannot manage and reuse exploration trajectories. This work proposes T2VTree, a user-centered visual analytics approach that, for the first time, integrates a tree-structured representation with editable agent-based planning. It models the creative process as an intent-output tree that is traceable, branchable, and reusable: each node encapsulates a user intent and its associated multimodal outputs, while collaborating agents generate editable execution plans. The system incorporates tree-based visualization, multi-agent coordination, in-situ previewing, and clip stitching. Case studies and user experiments demonstrate that T2VTree significantly enhances controllability, efficiency, and overall user experience in complex video generation tasks.
📝 Abstract
Generative models have substantially expanded video generation capabilities, yet practical thought-to-video creation remains a multi-stage, multimodal, and decision-intensive process. Existing tools either hide intermediate decisions behind repeated reruns or expose operator-level workflows whose exploration traces are difficult to manage, compare, and reuse. We present T2VTree, a user-centered visual analytics approach for agent-assisted thought-to-video authoring. T2VTree represents the authoring process as a tree visualization in which each node binds an editable specification (intent, referenced inputs, workflow choice, prompts, and parameters) to the resulting multimodal outputs, making refinement, branching, and provenance inspection directly operable. To reduce the burden of deciding what to do next, a set of collaborating agents translates step-level intent into an executable plan that remains visible and user-editable before execution. We further implement a visual analytics system that integrates branching authoring with in-place preview and stitching for convergent assembly, enabling end-to-end multi-scene creation without leaving the authoring context. We demonstrate T2VTree through two multi-scene case studies and a comparative user study, showing how the tree visualization and editable agent planning support reliable refinement, localized comparison, and practical reuse in real authoring workflows. T2VTree is available at: https://github.com/tezuka0210/T2VTree.
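The intent-output tree described in the abstract (each node binding an editable specification to its generated outputs, with branching and provenance inspection) can be sketched as a minimal data structure. This is an illustrative assumption, not the paper's actual implementation; the field names, the `branch` helper, and the `provenance` method are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TreeNode:
    """One authoring step: an editable spec plus its multimodal outputs (sketch)."""
    intent: str                                   # user intent for this step
    workflow: str                                 # chosen workflow label (hypothetical)
    prompts: dict = field(default_factory=dict)   # editable prompt fields
    outputs: list = field(default_factory=list)   # IDs/paths of generated media
    parent: Optional["TreeNode"] = None
    children: list = field(default_factory=list)

    def branch(self, intent: str, **spec) -> "TreeNode":
        """Fork an alternative from this node, inheriting its spec by default."""
        child = TreeNode(
            intent=intent,
            workflow=spec.get("workflow", self.workflow),
            prompts={**self.prompts, **spec.get("prompts", {})},
            parent=self,
        )
        self.children.append(child)
        return child

    def provenance(self) -> list:
        """Trace intents from the root down to this node."""
        path = [] if self.parent is None else self.parent.provenance()
        return path + [self.intent]

# Branching keeps the original node intact, so alternatives stay comparable.
root = TreeNode(intent="ocean sunset scene", workflow="text2video")
alt = root.branch("warmer color grade", prompts={"style": "golden hour"})
print(alt.provenance())  # ['ocean sunset scene', 'warmer color grade']
```

Under this sketch, refinement edits a node's spec in place, branching adds a sibling alternative, and provenance inspection is a walk back to the root.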