🤖 AI Summary
This work addresses the limitations of existing music captioning approaches that rely on large language models (LLMs) to synthesize training data from metadata, often resulting in stylistic rigidity and entanglement between factual content and linguistic style. To overcome this, the authors propose a decoupled two-stage framework: first predicting fine-grained metadata directly from audio, then leveraging a pretrained LLM at inference time to convert this metadata into natural language descriptions. This design circumvents the stylistic constraints imposed by end-to-end training, enables flexible post-hoc control over descriptive style, and supports metadata completion by integrating audio with partial metadata inputs. Experimental results demonstrate that the method achieves performance comparable to strong baselines while significantly reducing training time, and effectively facilitates both style customization and metadata imputation.
📝 Abstract
Music captioning, or the task of generating a natural language description of music, is useful for both music understanding and controllable music generation. Training captioning models, however, typically requires high-quality music caption data, which is scarce compared to metadata (e.g., genre, mood, etc.). As a result, it is common to use large language models (LLMs) to synthesize captions from metadata as training data for captioning models, though this process imposes a fixed stylization and entangles factual information with natural language style. As a more direct approach, we propose metadata-based captioning: we train a metadata prediction model to infer detailed music metadata from audio and then convert it into expressive captions via pre-trained LLMs at inference time. Compared to a strong end-to-end baseline trained on LLM-generated captions derived from metadata, our method: (1) achieves comparable performance with less training time, (2) offers the flexibility to change stylization post-training, so output captions can be tailored to specific stylistic and quality requirements, and (3) can be prompted with audio and partial metadata to enable powerful metadata imputation or in-filling, a common task when organizing music data.
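The decoupling described above can be sketched in a few lines. This is an illustrative stand-in, not the authors' code: the metadata predictor is stubbed with fixed values, and the function and field names (`predict_metadata`, `build_caption_prompt`, the `style` parameter) are hypothetical. The point is that factual content (metadata) and linguistic style (the LLM prompt) live in separate stages, so style can be changed post-training without touching the trained model.

```python
def predict_metadata(audio):
    """Stage 1: stand-in for the trained audio-to-metadata model.
    A real model would infer these fields from the waveform."""
    return {
        "genre": "jazz",
        "mood": "relaxed",
        "instruments": ["piano", "upright bass"],
    }

def build_caption_prompt(metadata, style="one vivid sentence"):
    """Stage 2: format predicted metadata into an LLM prompt.
    Changing `style` retargets the caption without any retraining."""
    fields = "; ".join(
        f"{k}: {', '.join(v) if isinstance(v, list) else v}"
        for k, v in metadata.items()
    )
    return f"Describe this music in {style}. Metadata: {fields}"

# Partial metadata supplied by the user overrides or fills in predictions,
# mirroring the metadata-imputation use case from the paper.
predicted = predict_metadata(audio=None)
partial = {"mood": "melancholic"}
merged = {**predicted, **partial}
prompt = build_caption_prompt(merged, style="two casual sentences")
print(prompt)
```

Because the LLM only sees the prompt string, swapping `style` or the downstream LLM leaves stage 1 untouched, which is the flexibility the abstract highlights.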