🤖 AI Summary
Traditional sound synthesis methods struggle to offer fine-grained, interpretable timbre control, while text-to-audio generation models lack precise control over subtle timbral variations. To address this, we propose a similarity-driven conditional control framework that integrates differentiable digital signal processing (DDSP) with pretrained audio representation models (e.g., the Audio Spectrogram Transformer, AST), mapping normalized timbre similarity vectors into continuous, interpretable latent-space control signals. We introduce a novel similarity encoding mechanism that enables cross-category timbre interpolation and quantitative, regression-based control. Furthermore, we construct two dedicated evaluation datasets--Footstep-set and Impact-set--to rigorously assess timbral controllability and fidelity. Experiments demonstrate a statistically significant correlation between similarity scores and perceptual timbre variation (p < 0.01). The generated sounds achieve high fidelity and strong controllability, effectively supporting creative timbre blending and smooth timbral transitions.
📝 Abstract
Generating sound effects with controllable variations is a challenging task, traditionally addressed using sophisticated physical models that require in-depth knowledge of signal processing parameters and algorithms. In the era of generative and large language models, text has emerged as a common, human-interpretable interface for controlling sound synthesis. However, the discrete and qualitative nature of language tokens makes it difficult to capture subtle timbral variations across different sounds. In this research, we propose a novel similarity-based conditioning method for sound synthesis, leveraging differentiable digital signal processing (DDSP). This approach combines a latent space for learning and controlling audio timbre with an intuitive guiding vector, normalized to the range [0, 1], that encodes categorical acoustic information. By utilizing pre-trained audio representation models, our method achieves expressive and fine-grained timbre control. To benchmark our approach, we introduce two sound effect datasets--Footstep-set and Impact-set--designed to evaluate both controllability and sound quality. Regression analysis demonstrates that the proposed similarity score effectively controls timbre variations and enables creative applications such as timbre interpolation between discrete classes. Our work provides a robust and versatile framework for sound effect synthesis, bridging the gap between traditional signal processing and modern machine learning techniques.
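To make the conditioning idea concrete, here is a minimal sketch of one way the guiding vector could be formed: cosine similarities between a query sound's embedding (from a pretrained model such as AST) and per-class prototype embeddings, rescaled to [0, 1]. The function name, the prototype-based encoding, and the toy embeddings are all illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def similarity_condition(query_emb, class_embs):
    """Map a query embedding to a [0, 1]-normalized similarity vector
    over per-class reference embeddings (one prototype per timbre class).
    Hypothetical helper; the paper's exact encoding may differ."""
    query = query_emb / np.linalg.norm(query_emb)
    refs = class_embs / np.linalg.norm(class_embs, axis=1, keepdims=True)
    cos = refs @ query            # cosine similarities in [-1, 1]
    return (cos + 1.0) / 2.0      # rescale to [0, 1]

# Toy embeddings standing in for pretrained (e.g., AST) features.
rng = np.random.default_rng(0)
class_embs = rng.standard_normal((2, 128))  # e.g., "footstep" vs. "impact"

# A query blending both classes yields intermediate similarity scores,
# which is what enables smooth cross-category timbre interpolation.
query_emb = 0.7 * class_embs[0] + 0.3 * class_embs[1]
s = similarity_condition(query_emb, class_embs)
# s would be fed to the DDSP decoder as a continuous conditioning signal.
```

Under this reading, interpolating the guiding vector between two one-hot-like endpoints traces a continuous path between discrete timbre classes, matching the cross-category interpolation described above.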