🤖 AI Summary
Existing sound effect libraries commonly lack structured, information-rich metadata, which limits the performance of audio language models on professional sound retrieval and description tasks. This work proposes audiocards, a structured metadata framework tailored to sound design that integrates acoustic attributes and timbral descriptors into audio language modeling. By leveraging the world knowledge of large language models (LLMs), audiocards are generated automatically at high quality, moving beyond the limitations of conventional single-sentence captions. Training on audiocards improves text-to-audio retrieval, descriptive caption generation, and automatic metadata annotation on professional sound effect libraries, while also outperforming current single-caption baselines on general-purpose audio tasks.
📝 Abstract
Sound designers search large sound effects libraries for sounds using aspects such as sound class or visual context. However, the metadata needed for such search is often missing or incomplete, and adding it requires significant manual effort. Existing solutions automate this task by generating metadata (captioning) and by searching with learned embeddings (text-audio retrieval), but they are not trained on metadata with the structure and information pertinent to sound design. To this end, we propose audiocards: structured metadata grounded in acoustic attributes and sonic descriptors, generated by exploiting the world knowledge of LLMs. We show that training on audiocards improves downstream text-audio retrieval, descriptive captioning, and metadata generation on professional sound effects libraries. Moreover, audiocards also improve performance on general audio captioning and retrieval over the baseline single-sentence captioning approach. We release a curated dataset of sound effect audiocards to invite further research in audio language modeling for sound design.
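To make the idea of structured metadata concrete, here is a minimal, purely hypothetical sketch of what an audiocard-style record might look like. The abstract does not publish a schema, so every field name below (`sound_class`, `acoustic_attributes`, `sonic_descriptors`, etc.) is an assumption about what metadata "grounded in acoustic attributes and sonic descriptors" could contain, not the paper's actual format.

```python
from dataclasses import dataclass, field

# Hypothetical audiocard-style record: all field names are illustrative
# assumptions, not the schema released with the paper.
@dataclass
class AudioCard:
    sound_class: str                    # e.g. "door slam"
    visual_context: str                 # scene a designer might search by
    acoustic_attributes: dict = field(default_factory=dict)  # measurable properties
    sonic_descriptors: list = field(default_factory=list)    # timbral adjectives
    caption: str = ""                   # free-text description

    def to_search_text(self) -> str:
        """Flatten the structured card into one string, e.g. for indexing
        in a text-audio retrieval system."""
        parts = [self.sound_class, self.visual_context, self.caption]
        parts += [f"{k}: {v}" for k, v in self.acoustic_attributes.items()]
        parts += self.sonic_descriptors
        return "; ".join(p for p in parts if p)

card = AudioCard(
    sound_class="door slam",
    visual_context="heavy wooden door in an empty hallway",
    acoustic_attributes={"attack": "sharp", "decay": "long reverb tail"},
    sonic_descriptors=["boomy", "resonant", "hollow"],
    caption="A heavy wooden door slams shut, echoing down a hallway.",
)
print(card.to_search_text())
```

The point of the sketch is the contrast with a single-sentence caption: the same sound carries several independently searchable facets (class, context, acoustic attributes, timbre) that can be filtered or flattened as needed.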