Discovering Interpretable Concepts in Large Generative Music Models

📅 2025-05-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the implicit music-theoretic structures encoded in large generative music models and compares them with established music theory. Methodologically, we propose an interpretability framework based on sparse autoencoders (SAEs) to extract high-level semantic neural features from the residual stream of Transformer-based models, coupled with an automated annotation and semantic evaluation pipeline. For the first time in generative music models, we systematically identify two classes of interpretable concepts: one strictly aligning with conventional music-theoretic categories—such as chord progressions—and another comprising statistically robust, functionally coherent musical patterns lacking natural-language descriptions. We isolate hundreds of interpretable features, substantially enhancing model transparency and uncovering deep organizational principles overlooked by traditional analysis. These findings provide novel empirical grounding for computational models of music cognition.
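The SAE setup described above can be sketched minimally: an overcomplete dictionary encodes residual-stream activations into sparse, non-negative features, and a linear decoder reconstructs them, with an L1 penalty encouraging sparsity. All dimensions, parameter names, and the `l1_coeff` value below are illustrative assumptions, not the paper's actual configuration, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: d_model for the residual stream,
# d_feat (>> d_model) for the overcomplete SAE feature dictionary.
d_model, d_feat = 64, 512

# Randomly initialized SAE parameters (a real SAE would train these
# on activations collected from the music model's residual stream).
W_enc = rng.normal(0, 0.02, (d_model, d_feat))
b_enc = np.zeros(d_feat)
W_dec = rng.normal(0, 0.02, (d_feat, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode residual-stream activations into sparse features, then reconstruct."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)   # ReLU keeps features non-negative
    x_hat = f @ W_dec + b_dec                # linear decoder
    return f, x_hat

def sae_loss(x, f, x_hat, l1_coeff=1e-3):
    """Reconstruction error plus an L1 sparsity penalty on feature activations."""
    recon = ((x - x_hat) ** 2).mean()
    sparsity = np.abs(f).mean()
    return recon + l1_coeff * sparsity

# One batch of stand-in "residual stream" activations.
x = rng.normal(size=(8, d_model))
f, x_hat = sae_forward(x)
loss = sae_loss(x, f, x_hat)
```

Each column of `W_dec` then acts as a candidate "concept" direction in the residual stream, which is what the downstream annotation pipeline tries to label.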

📝 Abstract
The fidelity with which neural networks can now generate content such as music presents a scientific opportunity: these systems appear to have learned implicit theories of the structure of such content through statistical learning alone. This could offer a novel lens on theories of human-generated media. Where these representations align with traditional constructs (e.g. chord progressions in music), they demonstrate how these can be inferred from statistical regularities. Where they diverge, they highlight potential limits in our theoretical frameworks -- patterns that we may have overlooked but that nonetheless hold significant explanatory power. In this paper, we focus on the specific case of music generators. We introduce a method to discover musical concepts using sparse autoencoders (SAEs), extracting interpretable features from the residual stream activations of a transformer model. We evaluate this approach by extracting a large set of features and producing an automatic labeling and evaluation pipeline for them. Our results reveal both familiar musical concepts and counterintuitive patterns that lack clear counterparts in existing theories or natural language altogether. Beyond improving model transparency, our work provides a new empirical tool that might help discover organizing principles in ways that have eluded traditional methods of analysis and synthesis.
Problem

Research questions and friction points this paper is trying to address.

Discover interpretable musical concepts in generative models
Evaluate alignment with traditional music theory constructs
Identify novel patterns beyond existing theoretical frameworks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses sparse autoencoders for concept discovery
Extracts features from transformer activations
Automates labeling and evaluation pipeline
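The labeling-and-evaluation idea in the bullets above can be illustrated with a toy sketch: for each SAE feature, collect its top-activating snippets to propose a label, then score how well the label predicts when the feature fires. The activation matrix, threshold, and scoring rule here are hypothetical stand-ins for the paper's actual automated pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: feats[i, j] = activation of SAE feature j on music snippet i.
n_snippets, n_feats = 100, 16
feats = np.maximum(rng.normal(size=(n_snippets, n_feats)), 0.0)

def top_activating(feature_idx, k=5):
    """Indices of the k snippets that activate a feature most strongly
    (these would be shown to an annotator or LLM to propose a label)."""
    return np.argsort(feats[:, feature_idx])[::-1][:k]

def detection_score(feature_idx, labeled_positive, threshold=0.5):
    """Fraction of snippets where the proposed label agrees with the
    feature firing above threshold -- a crude stand-in for the paper's
    semantic evaluation step."""
    fired = feats[:, feature_idx] > threshold
    return float((fired == labeled_positive).mean())

idx = top_activating(3)
# Pretend an annotator marked exactly those top snippets as matching the label.
labels = np.zeros(n_snippets, dtype=bool)
labels[idx] = True
score = detection_score(3, labels)
```

Features whose labels score highly would correspond to the "interpretable concepts" the paper reports; low-scoring but consistently firing features resemble its second class of patterns without natural-language descriptions.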