Neural Proxies for Sound Synthesizers: Learning Perceptually Informed Preset Representations

📅 2025-09-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Non-differentiability of software synthesizers impedes their integration into end-to-end neural audio generation frameworks, hindering automatic synthesizer programming (ASP) and sound matching. Method: We propose a differentiable neural proxy that learns to map synthesizer presets into a perceptually consistent audio embedding space, so that embedding-space distance can serve as a differentiable proxy loss. We systematically evaluate multiple pretrained audio encoders (CLAP, VGGish) and neural architectures (feed-forward networks, RNNs, Transformers). Contribution/Results: Experiments across three mainstream synthesizers demonstrate that the learned preset representations are compact, perceptually meaningful, and generalize across synthesizers. The neural proxy significantly improves sound-matching accuracy, with performance depending jointly on encoder choice and model capacity. To our knowledge, this is the first systematic empirical study of pretrained audio models for synthesizer proxy learning.

📝 Abstract
Deep learning appears to be an appealing solution for Automatic Synthesizer Programming (ASP), which aims to assist musicians and sound designers in programming sound synthesizers. However, integrating software synthesizers into training pipelines is challenging due to their potential non-differentiability. This work tackles this challenge by introducing a method to approximate arbitrary synthesizers. Specifically, we train a neural network to map synthesizer presets onto an audio embedding space derived from a pretrained model. This facilitates the definition of a neural proxy that produces compact yet effective representations, thereby enabling the integration of audio embedding losses into neural-based ASP systems for black-box synthesizers. We evaluate the representations derived from various pretrained audio models in the context of neural-based ASP and assess the effectiveness of several neural network architectures, including feedforward, recurrent, and transformer-based models, in defining neural proxies. We evaluate the proposed method using both synthetic and hand-crafted presets from three popular software synthesizers and assess its performance on a synthesizer sound matching downstream task. While the benefits of the learned representation are nuanced by resource requirements, encouraging results were obtained for all synthesizers, paving the way for future research into the application of synthesizer proxies for neural-based ASP systems.
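To make the core idea concrete, here is a minimal sketch of the proxy-training setup described above: a feed-forward network regresses from a preset's parameter vector to the embedding that a frozen pretrained audio encoder assigns to the synthesizer's rendered audio. All dimensions, the hidden size, and the stand-in tensors are hypothetical (the paper evaluates feed-forward, recurrent, and transformer proxies against encoders such as CLAP and VGGish); in practice the target embeddings would be precomputed from rendered audio, which is why the synthesizer itself never needs to be differentiable.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: 48 normalized synth parameters, 128-dim embeddings.
NUM_PARAMS, EMB_DIM = 48, 128


class PresetProxy(nn.Module):
    """Feed-forward neural proxy mapping a preset (normalized parameter
    vector) into the embedding space of a pretrained audio encoder."""

    def __init__(self, num_params: int, emb_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_params, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, emb_dim),
        )

    def forward(self, presets: torch.Tensor) -> torch.Tensor:
        return self.net(presets)


proxy = PresetProxy(NUM_PARAMS, EMB_DIM)
opt = torch.optim.Adam(proxy.parameters(), lr=1e-3)

# One training step. The targets stand in for embeddings obtained by
# rendering each preset with the (non-differentiable) synthesizer and
# passing the audio through a frozen pretrained encoder.
presets = torch.rand(16, NUM_PARAMS)    # stand-in batch of presets
target_emb = torch.randn(16, EMB_DIM)   # stand-in encoder embeddings
pred_emb = proxy(presets)
loss = nn.functional.mse_loss(pred_emb, target_emb)
opt.zero_grad()
loss.backward()
opt.step()
```

Once trained, the frozen proxy gives a fully differentiable path from parameters to embeddings, so an ASP system can minimize the embedding distance between a predicted preset and a target sound by gradient descent alone.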
Problem

Research questions and friction points this paper is trying to address.

Learning neural proxies for non-differentiable software synthesizers
Mapping synthesizer presets to perceptually informed audio embeddings
Enabling neural-based sound matching for black-box synthesizer systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural network mapping presets to audio embeddings
Pretrained models enabling black-box synthesizer integration
Evaluating multiple architectures for effective neural proxies