FlowAct-R1: Towards Interactive Humanoid Video Generation

📅 2026-01-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the tension between high-fidelity human video generation and real-time interactivity by proposing a streaming video synthesis method built on the MMDiT architecture. By introducing chunk-wise diffusion forcing together with a self-forcing variant, the approach mitigates error accumulation over long sequences and preserves long-term temporal coherence. Combined with efficient distillation and system-level optimizations, the method achieves a first-frame latency of about 1.5 seconds and real-time performance at 25 fps at 480p resolution. It supports arbitrarily long sequences, fine-grained full-body pose control, and seamless transitions between natural behaviors, delivering high perceptual realism, vivid motion dynamics, and robust generalization across diverse character styles.

📝 Abstract
Interactive humanoid video generation aims to synthesize lifelike visual agents that can engage with humans through continuous and responsive video. Despite recent advances in video synthesis, existing methods often grapple with the trade-off between high-fidelity synthesis and real-time interaction requirements. In this paper, we propose FlowAct-R1, a framework specifically designed for real-time interactive humanoid video generation. Built upon an MMDiT architecture, FlowAct-R1 enables streaming synthesis of video of arbitrary duration while maintaining low-latency responsiveness. We introduce a chunk-wise diffusion forcing strategy, complemented by a novel self-forcing variant, to alleviate error accumulation and ensure long-term temporal consistency during continuous interaction. By leveraging efficient distillation and system-level optimizations, our framework achieves a stable 25 fps at 480p resolution with a time-to-first-frame (TTFF) of only around 1.5 seconds. The proposed method provides holistic and fine-grained full-body control, enabling the agent to transition naturally between diverse behavioral states in interactive scenarios. Experimental results demonstrate that FlowAct-R1 achieves exceptional behavioral vividness and perceptual realism, while maintaining robust generalization across diverse character styles.
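The chunk-wise streaming scheme the abstract describes can be illustrated with a toy sketch: video is produced one chunk at a time, and each new chunk is iteratively denoised while conditioned on clean frames from the previous chunk, so the stream can extend indefinitely without regenerating the past. This is a minimal illustration, not the paper's implementation — `denoise_chunk` is a hypothetical stand-in for the MMDiT denoiser, and all shapes, step counts, and context lengths are invented for demonstration.

```python
import numpy as np

def denoise_chunk(noisy, context, num_steps=4):
    # Hypothetical stand-in for the MMDiT denoiser: each step nudges the
    # noisy chunk toward its conditioning context, mimicking the iterative
    # refinement of diffusion sampling.
    x = noisy
    for _ in range(num_steps):
        x = 0.5 * x + 0.5 * context[-1]  # pull toward the last context frame
    return x

def generate_stream(num_chunks, chunk_len=4, frame_shape=(8, 8),
                    context_len=2, seed=0):
    """Chunk-wise streaming generation: each chunk is denoised conditioned
    on clean frames from the previous chunk, so sequence length is
    unbounded and latency is set by the first chunk, not the full video."""
    rng = np.random.default_rng(seed)
    frames = [np.zeros(frame_shape)]  # initial reference frame
    for _ in range(num_chunks):
        context = np.stack(frames[-context_len:])        # condition on recent past
        noisy = rng.standard_normal((chunk_len, *frame_shape))
        chunk = denoise_chunk(noisy, context)
        frames.extend(chunk)                             # append only the new chunk
    return np.stack(frames[1:])

video = generate_stream(num_chunks=3)
print(video.shape)  # (12, 8, 8): 3 chunks of 4 frames each
```

Conditioning on the model's own previously generated frames (rather than ground-truth frames, as in training-time teacher forcing) is the essence of the "self-forcing" variant the abstract mentions: it exposes the model to its own errors so they do not accumulate at inference time.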
Problem

Research questions and friction points this paper is trying to address.

interactive humanoid video generation
real-time interaction
high-fidelity synthesis
temporal consistency
responsive video
Innovation

Methods, ideas, or system contributions that make the work stand out.

interactive humanoid video generation
chunkwise diffusion forcing
self-forcing
real-time synthesis
temporal consistency