Audiocards: Structured Metadata Improves Audio Language Models For Sound Design

📅 2026-02-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing sound effect libraries commonly lack structured, information-rich metadata, which limits the performance of audio language models in professional sound retrieval and description tasks. This work proposes Audiocards—a structured metadata framework tailored for sound design—that uniquely integrates acoustic attributes and timbral descriptors into audio language modeling. By leveraging the world knowledge of large language models (LLMs), Audiocards automatically generates high-quality metadata, moving beyond the limitations of conventional single-sentence captions. The approach significantly improves performance on text-to-audio retrieval, descriptive caption generation, and automatic metadata annotation within professional sound effect libraries, while also outperforming current baselines on general-purpose audio tasks.

📝 Abstract
Sound designers search for sounds in large sound effects libraries using aspects such as sound class or visual context. However, the metadata needed for such search is often missing or incomplete, and adding it requires significant manual effort. Existing solutions automate this task by generating metadata (i.e., captioning) and by searching with learned embeddings (i.e., text-audio retrieval), but they are not trained on metadata with the structure and information pertinent to sound design. To this end, we propose audiocards, structured metadata grounded in acoustic attributes and sonic descriptors, built by exploiting the world knowledge of LLMs. We show that training on audiocards improves downstream text-audio retrieval, descriptive captioning, and metadata generation on professional sound effects libraries. Moreover, audiocards also improve performance on general audio captioning and retrieval over the baseline single-sentence captioning approach. We release a curated dataset of sound effects audiocards to invite further research in audio language modeling for sound design.
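The abstract describes audiocards as structured metadata grounded in acoustic attributes and sonic descriptors, but does not spell out the schema. As a minimal sketch, one plausible shape for such a record might look like the following; all field names and values here are illustrative assumptions, not the paper's actual format:

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical audiocard schema: field names (sound_class, visual_context,
# acoustic_attributes, timbral_descriptors, caption) are assumptions chosen
# to mirror the concepts named in the abstract.
@dataclass
class AudioCard:
    sound_class: str              # categorical label, e.g. "door slam"
    visual_context: str           # scene the sound would plausibly accompany
    acoustic_attributes: dict     # measurable or structured properties
    timbral_descriptors: list     # perceptual/sonic adjectives
    caption: str                  # free-form descriptive caption

card = AudioCard(
    sound_class="door slam",
    visual_context="heavy wooden door in an empty hallway",
    acoustic_attributes={"duration_s": 1.2, "attack": "sharp", "reverb": "long"},
    timbral_descriptors=["boomy", "resonant", "woody"],
    caption="A heavy wooden door slams shut, ringing out in a reverberant hallway.",
)

# Serialize to JSON so the structured metadata can be paired with an
# audio file for retrieval indexing or model training.
print(json.dumps(asdict(card), indent=2))
```

The point of such a structure, per the abstract, is that it carries richer, more queryable information than a single-sentence caption alone.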
Problem

Research questions and friction points this paper is trying to address.

structured metadata
audio language models
sound design
text-audio retrieval
audio captioning
Innovation

Methods, ideas, or system contributions that make the work stand out.

structured metadata
audio language models
sound design
text-audio retrieval
LLM-based captioning