🤖 AI Summary
Existing autoregressive music generation models rely on token-by-token prediction, diverging from human composers’ structural reasoning and consequently limiting musicality and coherence. This paper introduces MusiCoT—the first Chain-of-Thought (CoT) prompting framework tailored for music generation—guiding models to first plan global structure (e.g., sections, key, instrumentation) before generating audio tokens. Our method features: (1) a music-domain-specific structured CoT paradigm; (2) zero-shot, label-free structural interpretability analysis and variable-length style referencing via CLAP; and (3) end-to-end autoregressive audio token generation. Experiments demonstrate that MusiCoT matches state-of-the-art fidelity while significantly mitigating repetitive generation. Human evaluation confirms substantial improvements in both musicality and structural coherence.
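The two-stage decoding described above can be sketched in a toy form: first autoregressively emit a chain of structure-level "musical thought" embeddings, then generate discrete audio tokens conditioned on that plan. This is a minimal illustration only; the function names, embedding dimension, and conditioning scheme are hypothetical and not the paper's actual implementation.

```python
# Hypothetical sketch of MusiCoT-style two-stage autoregressive decoding.
# All names/shapes are illustrative assumptions, not the authors' code.
import random

def generate_structure(n_segments, embed_dim=4, seed=0):
    """Stage 1: autoregressively emit a chain of 'musical thoughts',
    one CLAP-style embedding per structural segment (intro, verse, ...)."""
    rng = random.Random(seed)
    thoughts = []
    prev = [0.0] * embed_dim
    for _ in range(n_segments):
        # Each thought is conditioned on the previous one (AR over structure).
        cur = [p + rng.gauss(0.0, 1.0) for p in prev]
        thoughts.append(cur)
        prev = cur
    return thoughts

def generate_audio_tokens(thoughts, tokens_per_segment=8, vocab=256, seed=0):
    """Stage 2: generate discrete audio tokens segment by segment,
    conditioned on the structure plan produced in stage 1."""
    rng = random.Random(seed)
    tokens = []
    for t in thoughts:
        bias = int(sum(t)) % vocab  # toy conditioning on the thought
        tokens.extend((bias + rng.randrange(vocab)) % vocab
                      for _ in range(tokens_per_segment))
    return tokens

plan = generate_structure(n_segments=3)
audio = generate_audio_tokens(plan)
```

In the real system, stage 1 would produce CLAP embeddings decoded from the same AR backbone, and stage 2 would be a neural audio-token decoder; the point here is only the ordering: structure plan first, audio tokens second.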
📝 Abstract
Autoregressive (AR) models have demonstrated impressive capabilities in generating high-fidelity music. However, the conventional next-token prediction paradigm in AR models does not align with the human creative process in music composition, potentially compromising the musicality of generated samples. To overcome this limitation, we introduce MusiCoT, a novel chain-of-thought (CoT) prompting technique tailored for music generation. MusiCoT empowers the AR model to first outline an overall music structure before generating audio tokens, thereby enhancing the coherence and creativity of the resulting compositions. By leveraging the contrastive language-audio pretraining (CLAP) model, we establish a chain of "musical thoughts", making MusiCoT scalable and independent of human-labeled data, in contrast to conventional CoT methods. Moreover, MusiCoT allows for in-depth analysis of music structure, such as instrumental arrangements, and supports music referencing -- accepting variable-length audio inputs as optional style references. This innovative approach effectively addresses copying issues, positioning MusiCoT as a vital practical method for music prompting. Our experimental results indicate that MusiCoT consistently achieves superior performance across both objective and subjective metrics, producing music quality that rivals state-of-the-art generation models. Our samples are available at https://MusiCoT.github.io/.