🤖 AI Summary
Existing digital human video generation systems suffer from high latency, substantial computational overhead, and limited multimodal controllability. To address these challenges, this work proposes a low-latency, highly controllable streaming multimodal video generation framework. Methodologically, it integrates autoregressive modeling with a diffusion-based architecture, introduces a large language model–enhanced multimodal conditional encoding mechanism, constructs a large-scale dialogue dataset, and employs a deep compression autoencoder that achieves 64× feature compression, thereby alleviating the long-sequence inference burden. Experiments demonstrate that the framework significantly improves responsiveness (end-to-end latency <200 ms) and motion accuracy across duplex dialogue, multilingual synthesis, and interactive world-modeling tasks. It supports joint audio–pose–text driving and fine-grained temporal control, effectively balancing efficiency, visual quality, and real-time interactivity.
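The 64× figure can be made concrete with a back-of-the-envelope sketch. One plausible (but assumed, not stated in the paper) factorization is 8× spatial downsampling along each axis, since 8 × 8 = 64; the point is that the autoregressive model then attends over 64× fewer positions per frame. The function and variable names below are illustrative, not from the paper:

```python
import numpy as np

def compressed_tokens(height, width, spatial_stride=8):
    """Token count per frame after encoding with a given spatial stride.

    A hypothetical factorization: 8x along each spatial axis gives
    an overall 8 * 8 = 64x reduction in positions per frame.
    """
    return (height // spatial_stride) * (width // spatial_stride)

frames, h, w = 30, 512, 512              # e.g. one second of 512x512 video
raw_positions = frames * h * w           # positions without compression
latent_positions = frames * compressed_tokens(h, w)
print(raw_positions // latent_positions)  # → 64 (sequence-length reduction)
```

Whatever the true factorization (spatial, temporal, or mixed), the effect on the autoregressive model is the same: inference cost scales with sequence length, so a 64× shorter latent sequence directly shrinks the long-horizon generation burden.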
📝 Abstract
Recently, interactive digital human video generation has attracted widespread attention and achieved remarkable progress. However, building a practical system that can interact with diverse input signals in real time remains challenging for existing methods, which often struggle with high latency, heavy computational cost, and limited controllability. In this work, we introduce an autoregressive video generation framework that enables interactive multimodal control and low-latency extrapolation in a streaming manner. With minimal modifications to a standard large language model (LLM), our framework accepts multimodal condition encodings including audio, pose, and text, and outputs spatially and semantically coherent representations to guide the denoising process of a diffusion head. To support this, we construct a large-scale dialogue dataset of approximately 20,000 hours from multiple sources, providing rich conversational scenarios for training. We further introduce a deep compression autoencoder with up to 64× reduction ratio, which effectively alleviates the long-horizon inference burden of the autoregressive model. Extensive experiments on duplex conversation, multilingual human synthesis, and interactive world modeling highlight the advantages of our approach in low latency, high efficiency, and fine-grained multimodal controllability.
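The architecture described above — an autoregressive backbone that fuses audio, pose, and text conditions and emits guidance for a diffusion head, frame by frame — can be sketched as a toy streaming loop. Everything here is a stand-in: the function names, the tiny hidden size, and the averaging-based "denoiser" are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # toy hidden size; the real model's width is not given

def encode_conditions(audio, pose, text):
    """Stand-in for the multimodal condition encoder (names are illustrative)."""
    return np.concatenate([audio, pose, text])

def llm_step(history, cond):
    """Stand-in for one autoregressive step: past latents + conditions -> guidance."""
    return np.tanh(history.mean(axis=0) + cond[:DIM])

def diffusion_head(guidance, steps=4):
    """Stand-in denoiser: iteratively refines noise toward the guidance vector."""
    x = rng.standard_normal(DIM)
    for _ in range(steps):
        x = x + 0.5 * (guidance - x)  # move part way toward guidance each step
    return x

history = [np.zeros(DIM)]             # seed latent
for t in range(3):                    # stream three latent frames
    cond = encode_conditions(rng.standard_normal(DIM),
                             rng.standard_normal(DIM),
                             rng.standard_normal(DIM))
    guidance = llm_step(np.stack(history), cond)
    latent_frame = diffusion_head(guidance)
    history.append(latent_frame)      # feed back for the next autoregressive step
print(len(history) - 1)               # → 3 latent frames generated
```

The design point the sketch captures is that each latent frame depends only on past frames and the current condition encoding, so generation is causal and can run in a streaming fashion, with per-frame latency governed by one LLM step plus a small number of denoising steps.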