Towards Diverse and Efficient Audio Captioning via Diffusion Models

📅 2024-09-14
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Existing audio captioning models are constrained by the autoregressive paradigm, suffering from slow generation and limited lexical and semantic diversity. This paper introduces DAC, the first non-autoregressive diffusion model for audio caption generation, pioneering the application of diffusion modeling to cross-modal text generation. DAC jointly models audio representations and textual denoising, enabling globally contextualized, parallel decoding without sequential token prediction. By leveraging the inherent stochasticity and multi-step sampling capability of diffusion models, DAC significantly improves both inference efficiency and semantic diversity. Extensive experiments on multiple benchmark datasets demonstrate that DAC achieves state-of-the-art performance on standard quality metrics (BLEU, METEOR, SPICE) as well as in latency and descriptive coverage, consistently outperforming dominant autoregressive baselines (e.g., Transformer-based models) and large language model variants.
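To make the contrast with autoregressive decoding concrete, here is a minimal toy sketch of the non-autoregressive idea: all caption positions start as noise and are refined in parallel over a few denoising steps. Everything here (the vocabulary, embeddings, schedule, and the stand-in "model" that snaps embeddings toward the nearest vocabulary vector) is hypothetical and not from the paper; DAC's actual denoiser is a learned network conditioned on audio features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and embeddings (hypothetical; DAC learns these jointly with audio).
vocab = ["a", "dog", "barks", "loudly", "<pad>"]
embed = rng.normal(size=(len(vocab), 8))

def denoise_step(x, audio_ctx, t, total):
    """One toy reverse-diffusion step: move every noisy token embedding
    toward a prediction, for all positions in parallel. The 'model' here
    is a stand-in: nearest vocabulary embedding plus an (unused, zero)
    audio-conditioning term."""
    dists = ((x[:, None, :] - embed[None, :, :]) ** 2).sum(-1)  # (seq, vocab)
    pred = embed[dists.argmin(-1)] + audio_ctx                  # (seq, dim)
    alpha = (t + 1) / total  # simple linear schedule toward the prediction
    return (1 - alpha) * x + alpha * pred

# Non-autoregressive: initialize *every* caption position from pure noise.
seq_len, dim, steps = 6, 8, 10
x = rng.normal(size=(seq_len, dim))
audio_ctx = np.zeros(dim)  # placeholder for audio conditioning
for t in range(steps):
    x = denoise_step(x, audio_ctx, t, steps)

# Round the final embeddings back to discrete tokens.
tokens = [vocab[((x[i] - embed) ** 2).sum(-1).argmin()] for i in range(seq_len)]
```

The point of the sketch is the control flow, not the output: the loop runs a fixed, small number of steps regardless of caption length, and re-running with a different random seed yields different captions, which is the source of the diversity the paper emphasizes.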

📝 Abstract
We introduce Diffusion-based Audio Captioning (DAC), a non-autoregressive diffusion model tailored for diverse and efficient audio captioning. Although existing captioning models relying on language backbones have achieved remarkable success in various captioning tasks, their insufficient performance in terms of generation speed and diversity impedes progress in audio understanding and multimedia applications. Our diffusion-based framework offers unique advantages stemming from its inherent stochasticity and holistic context modeling in captioning. Through rigorous evaluation, we demonstrate that DAC not only achieves SOTA performance in caption quality compared to existing benchmarks, but also significantly outperforms them in generation speed and diversity. The success of DAC illustrates that text generation can be seamlessly integrated with audio and visual generation tasks using a diffusion backbone, paving the way for a unified, audio-related generative model across different modalities.
Problem

Research questions and friction points this paper is trying to address.

Improving audio captioning diversity and efficiency
Overcoming slow generation in existing caption models
Unifying audio and text generation via diffusion models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Non-autoregressive diffusion model for captioning
Enhances generation speed and diversity
Unifies audio and text generation tasks