JAM-Flow: Joint Audio-Motion Synthesis with Flow Matching

📅 2025-06-30
🤖 AI Summary
This work addresses the disconnect between speech and facial motion in talking-face generation, where the two modalities are typically modeled separately. It proposes the first unified generative framework supporting heterogeneous inputs: text, audio, or motion references. Methodologically, it introduces a Multi-Modal Diffusion Transformer (MM-DiT) featuring selective cross-modal attention and temporally aligned positional embeddings, enabling efficient inter-modal interaction while preserving modality-specific characteristics under the flow-matching paradigm. The architecture comprises two branches, Motion-DiT and Audio-DiT, coupled through localized joint-attention masking and trained with a reconstruction-oriented (inpainting-style) objective. Experiments show that the method consistently produces high-fidelity, temporally aligned audiovisual sequences across diverse input conditions, substantially improving multimodal synchronization, and achieves state-of-the-art performance on benchmark talking-face synthesis tasks.

📝 Abstract
The intrinsic link between facial motion and speech is often overlooked in generative modeling, where talking head synthesis and text-to-speech (TTS) are typically addressed as separate tasks. This paper introduces JAM-Flow, a unified framework to simultaneously synthesize and condition on both facial motion and speech. Our approach leverages flow matching and a novel Multi-Modal Diffusion Transformer (MM-DiT) architecture, integrating specialized Motion-DiT and Audio-DiT modules. These are coupled via selective joint attention layers and incorporate key architectural choices, such as temporally aligned positional embeddings and localized joint attention masking, to enable effective cross-modal interaction while preserving modality-specific strengths. Trained with an inpainting-style objective, JAM-Flow supports a wide array of conditioning inputs, including text, reference audio, and reference motion, facilitating tasks such as synchronized talking head generation from text, audio-driven animation, and much more, within a single, coherent model. JAM-Flow significantly advances multi-modal generative modeling by providing a practical solution for holistic audio-visual synthesis. Project page: https://joonghyuk.com/jamflow-web
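The "localized joint attention masking" mentioned above can be pictured as a block attention mask: each modality attends freely within itself, while cross-modal attention is restricted to a temporal window around each token's time-aligned counterpart. The sketch below is a hypothetical illustration of that idea (the function name, window scheme, and alignment by linear index mapping are assumptions, not the paper's exact design):

```python
import numpy as np

def localized_joint_attention_mask(n_motion: int, n_audio: int, window: int = 4):
    """Boolean mask for joint attention over [motion tokens | audio tokens].

    True = attention allowed. Within-modality attention is unrestricted;
    cross-modal attention is limited to a temporal window around each
    token's time-aligned position. Hypothetical sketch, not the paper's
    exact masking scheme.
    """
    n = n_motion + n_audio
    mask = np.zeros((n, n), dtype=bool)
    mask[:n_motion, :n_motion] = True   # motion -> motion: unrestricted
    mask[n_motion:, n_motion:] = True   # audio -> audio: unrestricted
    for i in range(n_motion):
        # Map motion index i to its time-aligned audio index (linear scaling).
        j = round(i * (n_audio - 1) / max(n_motion - 1, 1))
        lo, hi = max(0, j - window), min(n_audio, j + window + 1)
        mask[i, n_motion + lo:n_motion + hi] = True   # motion -> nearby audio
        mask[n_motion + lo:n_motion + hi, i] = True   # audio -> nearby motion
    return mask
```

In a DiT block this mask would be passed to the attention operator (e.g. as `attn_mask` in PyTorch's `scaled_dot_product_attention`) so that distant cross-modal pairs contribute nothing, which keeps the interaction temporally local while leaving each modality's self-attention intact.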
Problem

Research questions and friction points this paper is trying to address.

Joint synthesis of facial motion and speech
Unified framework for multi-modal generative modeling
Effective cross-modal interaction preserving modality strengths
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified audio-motion synthesis with flow matching
Multi-Modal Diffusion Transformer for cross-modal interaction
Inpainting-style training for diverse conditioning inputs
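The last point, combining flow matching with an inpainting-style objective, can be sketched as follows: conditioning tokens (e.g. a reference audio or motion segment) are kept clean along the probability path, and the velocity regression loss is applied only to the tokens being generated. This is a minimal rectified-flow illustration under assumed details (per-token masking, a linear interpolation path, and the `model(x_t, t)` interface are all hypothetical):

```python
import numpy as np

def flow_matching_inpainting_loss(x1, cond_mask, model, rng):
    """One flow-matching training step with inpainting-style conditioning.

    x1: clean target tokens, shape (n, d).
    cond_mask: bool (n,), True where the token is given as conditioning.
    model(x_t, t): predicts the velocity field.
    Hypothetical sketch of the objective, not the paper's implementation.
    """
    x0 = rng.standard_normal(x1.shape)   # noise endpoint of the path
    t = rng.uniform()                    # random time in [0, 1]
    x_t = (1 - t) * x0 + t * x1          # linear (rectified-flow) path
    x_t[cond_mask] = x1[cond_mask]       # conditioning tokens stay clean
    v_target = x1 - x0                   # constant velocity along the path
    v_pred = model(x_t, t)
    err = (v_pred - v_target) ** 2
    return err[~cond_mask].mean()        # supervise only generated tokens
```

Because the loss ignores the conditioning positions, the same trained model can be driven by text, reference audio, or reference motion at inference simply by choosing which tokens to clamp, which is what lets one network cover the diverse tasks listed above.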