🤖 AI Summary
Current audio captioning models rely heavily on costly, labor-intensive paired annotations that may not reflect real-world user preferences. To address this, we propose a Reinforcement Learning from Human Feedback (RLHF) framework for audio captioning that trains a CLAP-based reward model on human pairwise preference data rather than ground-truth textual captions. The method combines contrastive language-audio pretraining, pairwise preference modeling, and reinforcement-learning policy fine-tuning, removing the need for supervised caption labels. Human evaluations on AudioCaps, Clotho, and other benchmarks show that the resulting captions are preferred over those of baseline models, with naturalness, accuracy, and preference alignment comparable to fully supervised methods.
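To make the reward-model stage concrete, here is a minimal sketch of the pairwise preference objective: the reward is a scaled cosine similarity between audio and caption embeddings, trained with a Bradley-Terry loss so the human-preferred caption scores higher. The `Linear` towers, dimensions, and the `CLAPRewardModel`/`preference_loss` names are hypothetical stand-ins, not the paper's actual encoders or code; a real setup would plug in pretrained CLAP towers here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CLAPRewardModel(nn.Module):
    """Reward = scaled cosine similarity between audio and caption embeddings.

    The two Linear layers are hypothetical stand-ins for pretrained CLAP
    audio/text towers; in practice those come from a CLAP checkpoint and
    may be kept frozen while only a small head is trained.
    """
    def __init__(self, audio_dim=1024, text_dim=768, embed_dim=512):
        super().__init__()
        self.audio_tower = nn.Linear(audio_dim, embed_dim)  # stand-in encoder
        self.text_tower = nn.Linear(text_dim, embed_dim)    # stand-in encoder
        self.scale = nn.Parameter(torch.tensor(10.0))       # learnable temperature

    def forward(self, audio_feats, text_feats):
        a = F.normalize(self.audio_tower(audio_feats), dim=-1)
        t = F.normalize(self.text_tower(text_feats), dim=-1)
        return self.scale * (a * t).sum(dim=-1)  # one scalar reward per pair

def preference_loss(reward_model, audio, preferred, rejected):
    """Bradley-Terry pairwise loss: push the human-preferred caption's
    reward above the rejected caption's reward for the same audio clip."""
    margin = reward_model(audio, preferred) - reward_model(audio, rejected)
    return -F.logsigmoid(margin).mean()

# Toy training step on random features (real inputs would be encoder features
# for an audio clip and its two human-ranked candidate captions).
rm = CLAPRewardModel()
opt = torch.optim.Adam(rm.parameters(), lr=1e-4)
audio, pref, rej = torch.randn(8, 1024), torch.randn(8, 768), torch.randn(8, 768)
loss = preference_loss(rm, audio, pref, rej)
opt.zero_grad(); loss.backward(); opt.step()
```

Note that nothing in this loss touches a reference caption: the only supervision is which of the two candidates a human preferred.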
📝 Abstract
Current audio captioning systems rely heavily on supervised learning with paired audio-caption datasets, which are expensive to curate and may not reflect human preferences in real-world scenarios. To address this limitation, we propose a preference-aligned audio captioning framework based on Reinforcement Learning from Human Feedback (RLHF). To effectively capture nuanced human preferences, we train a Contrastive Language-Audio Pretraining (CLAP)-based reward model using human-labeled pairwise preference data. This reward model is integrated into a reinforcement learning framework to fine-tune any baseline captioning system without relying on ground-truth caption annotations. Extensive human evaluations across multiple datasets show that our method produces captions preferred over those from baseline models, particularly in cases where the baseline models fail to provide correct and natural captions. Furthermore, our framework achieves performance comparable to supervised approaches with ground-truth data, demonstrating its effectiveness in aligning audio captioning with human preferences and its scalability in real-world scenarios.
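To illustrate the fine-tuning stage described above, here is a REINFORCE-style sketch: a policy samples captions, the preference-trained reward model scores them, and the policy is updated without any ground-truth captions. `ToyCaptionPolicy`, `rlhf_step`, `score_fn`, and the placeholder scorer are all illustrative assumptions; the paper's actual policy architecture and RL algorithm may differ.

```python
import torch
import torch.nn as nn

class ToyCaptionPolicy(nn.Module):
    """Tiny non-autoregressive stand-in for a captioning model: it maps
    audio features to one set of token logits and samples a fixed-length
    caption from them (a real captioner would decode autoregressively)."""
    def __init__(self, audio_dim=1024, vocab_size=1000, max_len=12):
        super().__init__()
        self.proj = nn.Linear(audio_dim, vocab_size)
        self.max_len = max_len

    def sample(self, audio_feats):
        dist = torch.distributions.Categorical(logits=self.proj(audio_feats))
        tokens, logps = [], []
        for _ in range(self.max_len):
            tok = dist.sample()
            tokens.append(tok)
            logps.append(dist.log_prob(tok))
        # (B, T) sampled token ids and the summed log-prob of each caption
        return torch.stack(tokens, dim=1), torch.stack(logps, dim=1).sum(dim=-1)

def rlhf_step(policy, score_fn, audio_feats, optimizer):
    """One REINFORCE update: sample captions, score them with the
    preference-trained reward model, and raise the log-probability of
    captions with above-average reward. No ground-truth captions are used."""
    tokens, logps = policy.sample(audio_feats)
    with torch.no_grad():
        rewards = score_fn(audio_feats, tokens)   # e.g. the CLAP reward model
    advantage = rewards - rewards.mean()          # simple batch baseline
    loss = -(advantage * logps).mean()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# Demo with random features and a placeholder scorer standing in for the
# trained reward model (which would embed the decoded caption text).
policy = ToyCaptionPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
audio = torch.randn(4, 1024)
placeholder_score = lambda a, t: torch.randn(a.size(0))
print(rlhf_step(policy, placeholder_score, audio, opt))
```

The mean-reward baseline is just the simplest variance-reduction choice for a sketch; self-critical baselines (reward of the greedy decode) are a common alternative in captioning RL.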