Auto-Regressive vs Flow-Matching: a Comparative Study of Modeling Paradigms for Text-to-Music Generation

📅 2025-06-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
The text-to-music generation field suffers from confounded design factors—diverse modeling paradigms, heterogeneous datasets, and inconsistent architectures—hindering principled analysis of key components. This work presents the first controlled, apples-to-apples comparison of autoregressive (AR) and conditional flow matching (CFM) paradigms under identical data, training configurations, and backbone architecture. By rigorously isolating variables, we systematically evaluate trade-offs across generation speed, long-horizon temporal coherence, audio inpainting capability, fine-grained text alignment, local editability, and robustness. Results show CFM excels in inference efficiency, global structure preservation, and reconstruction fidelity, whereas AR achieves superior token-level alignment, precise local editing, and resilience to input perturbations. Our study establishes a foundational empirical understanding of the inherent paradigm trade-offs, directly informing model selection and system design. To foster reproducibility, we publicly release benchmark code, a standardized evaluation framework, and high-fidelity audio samples.
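To make the two paradigms under comparison concrete, here is a minimal sketch of their training objectives: AR models minimize next-token cross-entropy over a discrete audio-token sequence, while CFM regresses a velocity field toward the constant target (x1 − x0) along a linear noise-to-data path. This is an illustrative toy (random stand-ins for model outputs, not the paper's actual architecture or data); the names `ar_nll`, `cfm_loss`, and all dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Auto-regressive (AR) objective: next-token cross-entropy ---
# Hypothetical toy setup: vocabulary of V audio tokens, sequence length T.
V, T = 8, 16
tokens = rng.integers(0, V, size=T)        # target token sequence
logits = rng.normal(size=(T, V))           # stand-in for model outputs

def ar_nll(logits, tokens):
    """Mean negative log-likelihood of each token given its prefix."""
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(tokens)), tokens].mean()

# --- Conditional flow matching (CFM) objective ---
# Linear interpolation path x_t = (1 - t) * x0 + t * x1; the regression
# target for the velocity field at (x_t, t) is the constant (x1 - x0).
D = 32                                     # latent dimensionality (assumed)
x0 = rng.normal(size=D)                    # noise sample
x1 = rng.normal(size=D)                    # data (audio latent) sample
t = rng.uniform()
x_t = (1 - t) * x0 + t * x1                # point on the probability path

def cfm_loss(v_pred, x0, x1):
    """Mean squared error between predicted and target velocity."""
    return ((v_pred - (x1 - x0)) ** 2).mean()

v_pred = rng.normal(size=D)                # stand-in for v_theta(x_t, t)
print(ar_nll(logits, tokens), cfm_loss(v_pred, x0, x1))
```

The asymmetry visible even in this sketch mirrors the paper's findings: AR factorizes the sequence token by token (hence fine-grained alignment and local edits), whereas CFM learns a global vector field over the whole latent (hence fast parallel sampling and global structure preservation).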

📝 Abstract
Recent progress in text-to-music generation has enabled models to synthesize high-quality musical segments, full compositions, and even respond to fine-grained control signals, e.g., chord progressions. State-of-the-art (SOTA) systems differ significantly across many dimensions, such as training datasets, modeling paradigms, and architectural choices. This diversity complicates efforts to evaluate models fairly and pinpoint which design choices most influence performance. While factors like data and architecture are important, in this study we focus exclusively on the modeling paradigm. We conduct a systematic empirical analysis to isolate its effects, offering insights into associated trade-offs and emergent behaviors that can guide future text-to-music generation systems. Specifically, we compare the two arguably most common modeling paradigms: Auto-Regressive decoding and Conditional Flow-Matching. We conduct a controlled comparison by training all models from scratch using identical datasets, training configurations, and similar backbone architectures. Performance is evaluated across multiple axes, including generation quality, robustness to inference configurations, scalability, adherence to both textual and temporally aligned conditioning, and editing capabilities in the form of audio inpainting. This comparative study sheds light on the distinct strengths and limitations of each paradigm, providing actionable insights that can inform future architectural and training decisions in the evolving landscape of text-to-music generation. Sampled audio examples are available at: https://huggingface.co/spaces/ortal1602/ARvsFM
Problem

Research questions and friction points this paper is trying to address.

Compare the Auto-Regressive and Flow-Matching paradigms for text-to-music generation under identical conditions
Isolate the modeling paradigm's impact on performance and characterize its trade-offs
Assess generation quality, scalability, and adherence to textual and temporally aligned conditioning
Innovation

Methods, ideas, or system contributions that make the work stand out.

First controlled, apples-to-apples study of the Auto-Regressive and Flow-Matching paradigms
Identical datasets, training configurations, and similar backbone architectures for a fair comparison
Evaluation across generation quality, robustness, scalability, conditioning adherence, and audio inpainting