🤖 AI Summary
This study investigates the applicability of differentiable audio similarity metrics to iterative timbre matching, addressing the central question: “Does a universally optimal loss function exist?” Three synthesizer paradigms (subtractive, additive, and amplitude modulation) are each paired with four differentiable loss functions (including parameter divergence and spectrogram distance), yielding twelve synthesizer–loss combinations. Performance on 300 randomized optimization trials per combination is assessed via parameter error, spectral distance, and subjective listening evaluations. Results demonstrate that loss-function efficacy is strongly architecture-dependent: no single loss is consistently optimal across synthesis paradigms. The three evaluation metrics exhibit moderate agreement, supporting the validity of the methodology while underscoring its context sensitivity. This work motivates the development of synthesis-specific similarity metrics and provides theoretical foundations and practical guidance for data-driven sound design.
📝 Abstract
Manual sound design with a synthesizer is inherently iterative: an artist compares the synthesized output to a mental target, adjusts parameters, and repeats until satisfied. Iterative sound-matching automates this workflow by repeatedly programming a synthesizer, under the guidance of a loss function (or similarity measure), toward a target sound. Prior comparisons of loss functions have typically favored one metric over another, but only within narrow settings: limited synthesis methods, few loss types, and often no blind listening tests. This leaves open the question of whether a universally optimal loss exists, or whether the choice of loss remains a creative decision conditioned on the synthesis method and the sound designer's preference. We propose differentiable iterative sound-matching as the natural extension of the existing literature, since it combines the manual approach to sound design with modern advances in machine learning. To analyze the variability of loss-function performance across synthesizers, we implemented a mix of four novel and established differentiable loss functions and paired them with differentiable subtractive, additive, and AM synthesizers. For each of the twelve synthesizer--loss combinations, we ran 300 randomized sound-matching trials. Performance was measured using parameter differences, spectrogram-distance metrics, and manually assigned listening scores. We observed a moderate level of consistency among the three performance measures, and our post-hoc analysis shows that loss-function performance is highly dependent on the synthesizer. These findings underscore the value of broadening the scope of sound-matching experiments and of developing similarity metrics tailored to specific synthesis techniques rather than pursuing one-size-fits-all solutions.
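To make the iterative loop concrete, here is a minimal sketch in PyTorch: a toy differentiable AM synthesizer whose parameters are fitted to a target sound by gradient descent on a magnitude-spectrogram loss. The synthesizer, the specific loss, and all parameter values are illustrative assumptions, not the implementation evaluated in the paper.

```python
import torch

def am_synth(carrier_freq, mod_freq, mod_depth, sr=16000, dur=0.25):
    # Toy differentiable amplitude-modulation synthesizer (illustrative only).
    t = torch.arange(int(sr * dur), dtype=torch.float32) / sr
    carrier = torch.sin(2 * torch.pi * carrier_freq * t)
    envelope = 1.0 + mod_depth * torch.sin(2 * torch.pi * mod_freq * t)
    return envelope * carrier

def spectrogram_loss(x, y, n_fft=512):
    # L1 distance between magnitude spectrograms -- one of many possible losses.
    win = torch.hann_window(n_fft)
    X = torch.stft(x, n_fft, window=win, return_complex=True).abs()
    Y = torch.stft(y, n_fft, window=win, return_complex=True).abs()
    return (X - Y).abs().mean()

# Target sound rendered from known "ground truth" parameters.
target = am_synth(torch.tensor(440.0), torch.tensor(5.0), torch.tensor(0.5))

# Learnable parameters, deliberately initialized away from the target.
mod_freq = torch.tensor(2.0, requires_grad=True)
mod_depth = torch.tensor(0.1, requires_grad=True)
opt = torch.optim.Adam([mod_freq, mod_depth], lr=0.05)

losses = []
for _ in range(200):
    opt.zero_grad()
    pred = am_synth(torch.tensor(440.0), mod_freq, mod_depth)
    loss = spectrogram_loss(pred, target)
    loss.backward()  # gradients flow through the STFT and the synthesizer
    opt.step()
    losses.append(loss.item())
```

Because the loss surface over frequency-like parameters is non-convex, a run like this can stall in a local minimum for one synthesizer yet converge cleanly for another, which is the kind of architecture dependence the paper examines at scale.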