🤖 AI Summary
This work addresses the high cost of data acquisition in parametric guitar amplifier modeling. We propose Panama, an end-to-end differentiable modeling framework that combines an LSTM with a WaveNet-inspired architecture, coupled with a novel ensemble-based active learning strategy: gradient-based optimization maximizes disagreement among ensemble members to dynamically select the most informative knob configurations to measure, significantly reducing the number of required measurements. Experiments show that Panama matches the audio fidelity of the non-parametric benchmark NAM in MUSHRA subjective listening tests using only 75 parameter settings, substantially lowering hardware measurement overhead. The core contribution is the integration of differentiable active learning with parametric neural audio modeling, enabling high-fidelity guitar amp modeling from a small number of measurements.
📝 Abstract
We introduce Panama, an active learning framework for training parametric guitar amp models end-to-end using a combination of an LSTM model and a WaveNet-like architecture. With our model, one can create a virtual amp by recording samples whose settings are determined through an ensemble-based active learning strategy, minimizing the number of datapoints needed (i.e., recordings at specific amp knob settings). Our strategy uses gradient-based optimization to maximize the disagreement among ensemble models, in order to identify the most informative datapoints. MUSHRA listening tests reveal that, with 75 datapoints, our models match the perceptual quality of NAM, the leading open-source non-parametric amp modeler.
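The query-selection step described above can be sketched with a toy ensemble. Everything concrete below is an illustrative assumption, not the paper's implementation: the ensemble members are tiny linear models (Panama's are neural amp models), and the knob dimensionality, learning rate, and step count are arbitrary. Only the overall scheme follows the text: gradient ascent on the variance of the ensemble's predictions over the knob space, which is the disagreement being maximized.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the ensemble: K linear "amp models" mapping a knob
# configuration theta (n_knobs values in [0, 1]) to a scalar output.
# Linear maps keep the sketch self-contained and make the disagreement
# gradient analytic; real ensemble members would use autodiff instead.
K, n_knobs = 5, 4
W = rng.normal(size=(K, n_knobs))

def disagreement(theta):
    """Variance of the ensemble's predictions at knob setting theta."""
    preds = W @ theta            # shape (K,)
    return preds.var()

def disagreement_grad(theta):
    """Analytic gradient of the prediction variance w.r.t. theta."""
    centered = W @ theta - (W @ theta).mean()
    return (2.0 / K) * centered @ W

# Gradient ascent: move theta toward the setting where the ensemble
# members disagree most, clipping to the valid knob range [0, 1].
theta = rng.uniform(size=n_knobs)
for _ in range(200):
    theta = np.clip(theta + 0.05 * disagreement_grad(theta), 0.0, 1.0)
print("next knob setting to record:", np.round(theta, 3))
```

In the full active learning loop, the selected setting would then be recorded on the real amplifier, added to the training set, and the ensemble retrained before the next query, repeating until the measurement budget (75 settings in the paper's experiments) is exhausted.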