🤖 AI Summary
This work addresses the lack of systematic evaluation and benchmark datasets for pun comprehension in spoken language within current large audio-language models. To bridge this gap, the authors introduce APUN-Bench, the first benchmark specifically designed for audio pun understanding, comprising 4,434 human-annotated audio samples organized into three staged tasks: pun recognition, pun word localization, and pun meaning inference. The study conducts a comprehensive evaluation of ten state-of-the-art models, revealing significant deficiencies, including positional bias in pun word localization and weaknesses in semantic reasoning. By establishing a structured evaluation framework and providing empirical analysis, this research lays the groundwork for advancing machine comprehension of spoken humour.
📝 Abstract
Puns are a distinctive linguistic phenomenon that exploits polysemy and phonetic ambiguity to generate humour, posing unique challenges for natural language understanding. Although audio, alongside text and images, plays a central role in human communication, datasets and systematic resources for spoken puns remain scarce, leaving this crucial modality largely underexplored. In this paper, we present APUN-Bench, the first benchmark dedicated to evaluating large audio-language models (LALMs) on audio pun understanding. Our benchmark contains 4,434 audio samples annotated across three stages: pun recognition, pun word localization, and pun meaning inference. We conduct an in-depth analysis of APUN-Bench by systematically evaluating 10 state-of-the-art LALMs, uncovering substantial performance gaps in recognizing, localizing, and interpreting audio puns. This analysis reveals key challenges, such as positional bias in pun word localization and characteristic error cases in meaning inference, offering actionable insights for advancing humour-aware audio intelligence.