Improving AI-generated music with user-guided training

📅 2025-06-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
AI-based music generation faces a fundamental challenge in modeling users' subjective musical preferences: conventional training on static datasets cannot adapt to individual tastes. To address this, the authors propose a human-in-the-loop, feedback-driven fine-tuning framework: user-provided ratings are aggregated into a personalized objective; a genetic algorithm then optimizes model parameters against it; and the approach applies to cross-modal (spectrogram-to-audio) generation with both diffusion and autoregressive models. Because the genetic algorithm is gradient-free, sparse, ordinal user ratings can serve directly as the optimization signal. Pilot experiments show rapid convergence: over two iterative refinement rounds, average user ratings rise by a cumulative +0.59 points (+0.2 in the first iteration over the baseline, a further +0.39 in the second), supporting the efficacy of closed-loop preference modeling.

📝 Abstract
AI music generation has advanced rapidly, with models like diffusion and autoregressive algorithms enabling high-fidelity outputs. These tools can alter styles, mix instruments, or isolate them. Since sound can be visualized as spectrograms, image-generation algorithms can be applied to generate novel music. However, these algorithms are typically trained on fixed datasets, which makes it challenging for them to interpret and respond to user input accurately. This is especially problematic because music is highly subjective and requires a level of personalization that image generation does not provide. In this work, we propose a human-computation approach to gradually improve the performance of these algorithms based on user interactions. The human-computation element involves aggregating and selecting user ratings to use as the loss function for fine-tuning the model. We employ a genetic algorithm that incorporates user feedback to enhance the baseline performance of a model initially trained on a fixed dataset. The effectiveness of this approach is measured by the average increase in user ratings with each iteration. In the pilot test, the first iteration showed an average rating increase of 0.2 compared to the baseline. The second iteration further improved upon this, achieving an additional increase of 0.39 over the first iteration.
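The closed loop the abstract describes (generate candidates, collect user ratings, evolve model parameters with a genetic algorithm) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: `rate_fn`, the flat parameter vectors, and all hyperparameters are assumptions standing in for a fine-tuned music model scored by real listeners.

```python
import random

def evolve(population, rate_fn, generations=2, mutation_sigma=0.1, seed=0):
    """Illustrative genetic fine-tuning loop in which aggregated user
    ratings (rate_fn, higher is better) act as the fitness function."""
    rng = random.Random(seed)
    for _ in range(generations):
        # Selection: rank candidates by their aggregated user rating
        # and keep the top half as parents, so the best never regresses.
        scored = sorted(population, key=rate_fn, reverse=True)
        parents = scored[: max(2, len(scored) // 2)]
        # Crossover (average of two parents) plus Gaussian mutation
        # to refill the population with new candidates to be rated.
        children = []
        while len(parents) + len(children) < len(population):
            a, b = rng.sample(parents, 2)
            children.append([(x + y) / 2 + rng.gauss(0.0, mutation_sigma)
                             for x, y in zip(a, b)])
        population = parents + children
    return max(population, key=rate_fn)

# Synthetic stand-in for listener ratings: peaks when parameters are near 1.0.
rating = lambda params: -sum((x - 1.0) ** 2 for x in params)
rng = random.Random(1)
pop = [[rng.uniform(-2, 2) for _ in range(4)] for _ in range(20)]
best = evolve(pop, rating, generations=10)
```

In the paper the fitness comes from human ratings collected each iteration; the synthetic `rating` here only makes the sketch runnable.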
Problem

Research questions and friction points this paper is trying to address.

AI music generation trained on fixed datasets lacks user personalization
User-guided training improves music generation via feedback integration
Genetic algorithm enhances model performance using human ratings
Innovation

Methods, ideas, or system contributions that make the work stand out.

User-guided training for AI music generation
Genetic algorithm incorporating user feedback
Human-computation approach for model fine-tuning
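The human-computation contribution named above, aggregating sparse per-clip ratings into per-clip fitness scores, could look something like the following minimal sketch; the rating scale, the `min_votes` threshold, and the data layout are illustrative assumptions, not details from the paper.

```python
from statistics import mean

def aggregate_ratings(ratings_per_clip, min_votes=2):
    """Turn sparse, ordinal per-clip ratings (e.g. 1-5 stars) into one
    fitness score per clip. Clips with too few votes are withheld
    rather than trusted on a single opinion."""
    return {clip: mean(votes)
            for clip, votes in ratings_per_clip.items()
            if len(votes) >= min_votes}

scores = aggregate_ratings({"clip_a": [4, 5, 4], "clip_b": [2]})
```

The resulting scores can then feed the selection step of the genetic algorithm in place of a conventional differentiable loss.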
Vishwa Mohan Singh
Institut für Statistik, Ludwig-Maximilians-Universität München; Institut für Informatik, Ludwig-Maximilians-Universität München
Sai Anirudh Aryasomayajula
Institut für Statistik, Ludwig-Maximilians-Universität München; Institut für Informatik, Ludwig-Maximilians-Universität München
Ahan Chatterjee
Institut für Statistik, Ludwig-Maximilians-Universität München; Institut für Informatik, Ludwig-Maximilians-Universität München
Beste Aydemir
Institut für Statistik, Ludwig-Maximilians-Universität München; Institut für Informatik, Ludwig-Maximilians-Universität München
Rifat Mehreen Amin
PhD student in HCI at Ludwig Maximilian University
Human-Computer Interaction · Creative tools · Explainability in AI · InfoVis