T2VTree: User-Centered Visual Analytics for Agent-Assisted Thought-to-Video Authoring

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video generation tools struggle to support multi-stage, multimodal, and decision-intensive workflows that translate complex ideation into video content, largely due to their inability to manage and reuse exploration trajectories. This work proposes T2VTree, a user-centered visual analytics approach that, for the first time, integrates tree-structured representation with editable agent-based planning. It models the creative process as an intent-output tree—traceable, branchable, and reusable—where each node encapsulates user intent and associated multimodal outputs, and collaborative agents generate editable execution plans. The system incorporates tree-based visualization, multi-agent coordination, in-situ previewing, and clip stitching. Case studies and user experiments demonstrate that T2VTree significantly enhances controllability, efficiency, and overall user experience in complex video generation tasks.

📝 Abstract
Generative models have substantially expanded video generation capabilities, yet practical thought-to-video creation remains a multi-stage, multi-modal, and decision-intensive process. Existing tools either hide intermediate decisions behind repeated reruns or expose operator-level workflows that make exploration traces difficult to manage, compare, and reuse. We present T2VTree, a user-centered visual analytics approach for agent-assisted thought-to-video authoring. T2VTree represents the authoring process as a tree visualization. Each node in the tree binds an editable specification (intent, referenced inputs, workflow choice, prompts, and parameters) with the resulting multimodal outputs, making refinement, branching, and provenance inspection directly operable. To reduce the burden of deciding what to do next, a set of collaborating agents translates step-level intent into an executable plan that remains visible and user-editable before execution. We further implement a visual analytics system that integrates branching authoring with in-place preview and stitching for convergent assembly, enabling end-to-end multi-scene creation without leaving the authoring context. We demonstrate T2VTree through two multi-scene case studies and a comparative user study, showing how the T2VTree visualization and editable agent planning support reliable refinement, localized comparison, and practical reuse in real authoring workflows. T2VTree is available at: https://github.com/tezuka0210/T2VTree.
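The abstract describes each tree node as binding an editable specification (intent, referenced inputs, workflow choice, prompts, parameters) to its multimodal outputs, with branching for exploration. A minimal sketch of such a node is below; all names (`TreeNode`, `branch`, the field names) are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class TreeNode:
    """Illustrative intent-output tree node: an editable spec plus its outputs."""
    intent: str                                                  # step-level user intent
    referenced_inputs: list[str] = field(default_factory=list)   # ids of reused prior outputs
    workflow: str = ""                                           # chosen generation workflow
    prompts: list[str] = field(default_factory=list)
    parameters: dict[str, Any] = field(default_factory=dict)
    outputs: list[str] = field(default_factory=list)             # ids of multimodal outputs
    children: list["TreeNode"] = field(default_factory=list)

    def branch(self, **overrides: Any) -> "TreeNode":
        """Create a child inheriting this node's spec, with selective edits."""
        spec: dict[str, Any] = dict(
            intent=self.intent,
            referenced_inputs=list(self.referenced_inputs),
            workflow=self.workflow,
            prompts=list(self.prompts),
            parameters=dict(self.parameters),
        )
        spec.update(overrides)
        child = TreeNode(**spec)
        self.children.append(child)
        return child

# Branching keeps the parent's spec intact while exploring a variation.
root = TreeNode(intent="storyboard a two-scene product intro", workflow="text-to-video")
alt = root.branch(prompts=["warmer lighting, slower pan"])
```

Keeping outputs attached to the spec that produced them is what makes provenance inspection and reuse (the "referenced inputs" of later nodes) directly operable, as the abstract emphasizes.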
Problem

Research questions and friction points this paper is trying to address.

thought-to-video
visual analytics
agent-assisted authoring
multi-modal generation
workflow management
Innovation

Methods, ideas, or system contributions that make the work stand out.

visual analytics
agent-assisted authoring
thought-to-video
tree-based workflow
editable planning
Zhuoyun Zheng
Computer Network Information Center, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Yu Dong
Computer Network Information Center, Chinese Academy of Sciences
Visual Analytics, Human-Computer Interaction
Gaorong Liang
Computer Network Information Center, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Guan Li
Computer Network Information Center, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Guihua Shan
Computer Network Information Center, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China; Hangzhou Institute for Advanced Study, UCAS, Hangzhou, China
Shiyu Cheng
PhD student, Washington University in St. Louis
Dong Tian
Research Scientist, InterDigital
3D Video Processing, Point Cloud Processing, Deep Learning, Compression
Jianlong Zhou
University of Technology Sydney (UTS)
AI Ethics, AI Fairness, AI Explainability, Human Centred AI, Human Computer Interaction
Jie Liang
Australian National University
Computer Vision, Hyperspectral Imaging