MIDAS: Multimodal Interactive Digital-human Synthesis via Real-time Autoregressive Video Generation

📅 2025-08-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing digital human video generation systems suffer from high latency, substantial computational overhead, and limited multimodal controllability. To address these challenges, this work proposes a low-latency, highly controllable streaming multimodal video generation framework. Methodologically, it integrates autoregressive modeling with a diffusion-based denoising head, introduces a large language model–enhanced multimodal conditional encoding mechanism, constructs a large-scale dialogue dataset, and employs a deep compression autoencoder to achieve 64× feature compression, thereby alleviating the long-sequence inference burden. Experiments demonstrate that the framework significantly improves responsiveness (end-to-end latency under 200 ms) and motion accuracy across duplex dialogue, multilingual synthesis, and interactive world-modeling tasks. It supports joint audio–pose–text driving and fine-grained temporal control, effectively balancing efficiency, visual quality, and real-time interactivity.
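The streaming pipeline summarized above can be sketched as a loop: at each step, fused multimodal condition tokens (audio, pose, text) are summarized by an autoregressive backbone whose output guides a denoising head that produces the next latent frame. The toy linear "backbone", the single-step "diffusion head", and all shapes below are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

# Hedged sketch of a streaming autoregressive generation loop with a
# condition-guided denoising head. Every name and dimension here is a
# stand-in chosen for illustration.

rng = np.random.default_rng(0)
d_model, latent_dim = 32, 8

W_backbone = rng.standard_normal((d_model, d_model)) * 0.1
W_head = rng.standard_normal((d_model + latent_dim, latent_dim)) * 0.1

def generate_stream(cond_tokens, n_frames):
    """cond_tokens: (T, d_model) streamed multimodal condition encodings."""
    frames = []
    latent = np.zeros(latent_dim)  # latent of the previous frame
    for t in range(n_frames):
        # Autoregressive summary of the conditions seen so far (causal mean
        # pooling stands in for the LLM backbone's hidden state).
        h = np.tanh(cond_tokens[: t + 1].mean(axis=0) @ W_backbone)
        # The diffusion head is collapsed to one denoising step from noise,
        # guided by the backbone output via concatenation.
        noisy = latent + rng.standard_normal(latent_dim)
        latent = np.concatenate([h, noisy]) @ W_head
        frames.append(latent)
    return np.stack(frames)

video_latents = generate_stream(rng.standard_normal((16, d_model)), n_frames=4)
print(video_latents.shape)  # (4, 8)
```

The point of the sketch is the control flow, not the modules: conditions arrive as a stream, and each frame latent is produced causally, which is what enables the low-latency extrapolation the paper claims.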

📝 Abstract
Recently, interactive digital human video generation has attracted widespread attention and achieved remarkable progress. However, building a practical system that can interact with diverse input signals in real time remains challenging for existing methods, which often struggle with high latency, heavy computational cost, and limited controllability. In this work, we introduce an autoregressive video generation framework that enables interactive multimodal control and low-latency extrapolation in a streaming manner. With minimal modifications to a standard large language model (LLM), our framework accepts multimodal condition encodings including audio, pose, and text, and outputs spatially and semantically coherent representations to guide the denoising process of a diffusion head. To support this, we construct a large-scale dialogue dataset of approximately 20,000 hours from multiple sources, providing rich conversational scenarios for training. We further introduce a deep compression autoencoder with up to a 64× reduction ratio, which effectively alleviates the long-horizon inference burden of the autoregressive model. Extensive experiments on duplex conversation, multilingual human synthesis, and interactive world modeling highlight the advantages of our approach in low latency, high efficiency, and fine-grained multimodal controllability.
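The "up to 64× reduction" in the abstract is a ratio of frame elements to latent elements, which a short calculation makes concrete. The specific spatial downsample factor and latent channel count below are illustrative assumptions, not values reported by the paper.

```python
# Hedged sketch: how a deep compression autoencoder can reach a 64x
# reduction ratio. The downsample factor f and latent channel count
# latent_ch are assumed for illustration only.

def compression_ratio(h, w, in_ch, f, latent_ch):
    """Number of elements in a frame divided by elements in its latent."""
    latent_elems = (h // f) * (w // f) * latent_ch
    return (h * w * in_ch) / latent_elems

# Example: 512x512 RGB frames, 16x spatial downsampling, 12 latent channels.
ratio = compression_ratio(512, 512, 3, f=16, latent_ch=12)
print(ratio)  # 64.0
```

Shrinking each frame's token footprint this way directly shortens the autoregressive context, which is why the compression ratio matters for long-horizon streaming inference.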
Problem

Research questions and friction points this paper is trying to address.

Real-time interactive digital human video generation
Reducing high latency and heavy computational cost
Multimodal input control with spatial and semantic coherence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Streaming autoregressive video generation framework
Integration of multimodal condition encodings (audio, pose, text)
Deep compression autoencoder with up to 64× reduction
Ming Chen
Kling Team, Kuaishou Technology
Liyuan Cui
Kling Team, Kuaishou Technology; Zhejiang University
Wenyuan Zhang
Kling Team, Kuaishou Technology; Tsinghua University
Haoxian Zhang
Kling Team, Kuaishou Technology
Yan Zhou
Kling Team, Kuaishou Technology
Xiaohan Li
Walmart Inc.
Data Mining, Recommender Systems, Medical AI
Xiaoqiang Liu
Kling Team, Kuaishou Technology
Pengfei Wan
Head of Kling Video Generation Models, Kuaishou Technology
Generative Models, Computer Vision, Multimodal AI, Computer Graphics