🤖 AI Summary
This study addresses the underexplored yet critical role of speech in irony detection, where prior work over-relies on textual cues and neglects cross-cultural and multilingual adaptability. We present the first dedicated survey on speech-centric irony recognition, systematically reviewing the evolution from unimodal speech to speech-text multimodal fusion. Our analysis covers benchmark datasets, acoustic feature engineering, deep representation learning, and multimodal integration techniques—highlighting the decisive influence of prosodic cues (e.g., intonation, rhythm) on ironic intent inference. Key contributions include: (1) establishing speech as the core modality reflecting irony’s multimodal nature; (2) identifying structural gaps in existing datasets regarding cultural coverage and linguistic diversity; and (3) proposing three future directions—cross-lingual acoustic modeling, culture-aware prosody analysis, and lightweight multimodal architectures—to advance irony computation from monolingual text-based paradigms toward universal, multimodal human-computer interaction.
📝 Abstract
Sarcasm, a common feature of human communication, poses challenges in both interpersonal and human-machine interaction. Linguistic research has highlighted the importance of prosodic cues, such as variations in pitch, speaking rate, and intonation, in conveying sarcastic intent. Although previous work has focused on text-based sarcasm detection, the role of speech data in recognizing sarcasm remains underexplored. Recent advances in speech technology underscore the growing value of leveraging speech data for automatic sarcasm recognition, which can support social interaction for individuals with neurodegenerative conditions and improve machine understanding of complex human language use, enabling more nuanced interactions. This systematic review is the first to focus on speech-based sarcasm recognition, charting the evolution from unimodal to multimodal approaches. It covers datasets, feature extraction, and classification methods, and aims to bridge gaps across diverse research domains. The findings reveal limitations in existing datasets for sarcasm recognition in speech, the evolution of feature extraction from traditional acoustic features to deep learning-based representations, and the progression of classification methods from unimodal approaches to multimodal fusion. We therefore identify the need for greater emphasis on cross-cultural and multilingual sarcasm recognition, and for treating sarcasm as a multimodal phenomenon rather than a purely text-based challenge.
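To make the "traditional acoustic features" end of the feature-extraction spectrum concrete, the sketch below estimates a pitch (F0) value via autocorrelation, the kind of prosodic cue the review discusses. This is an illustrative toy on a synthetic tone, not a method from any surveyed system; the function name `estimate_pitch` and all parameter values are hypothetical.

```python
import numpy as np

def estimate_pitch(signal, sr, fmin=50, fmax=500):
    # Estimate the fundamental frequency (F0) of a signal by finding
    # the lag with the strongest autocorrelation within the plausible
    # pitch range [fmin, fmax] (values here are illustrative defaults).
    sig = signal - np.mean(signal)                      # remove DC offset
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lag_min = int(sr / fmax)                            # shortest allowed period
    lag_max = int(sr / fmin)                            # longest allowed period
    best_lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sr / best_lag                                # period (samples) -> Hz

# Synthetic one-second "utterance": a pure 220 Hz tone at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220.0 * t)
f0 = estimate_pitch(tone, sr)  # close to 220 Hz
```

In a real system such frame-level F0 estimates would be computed over a sliding window, and their contour (mean, range, slope) fed to a classifier as a prosodic feature; deep learning-based representations replace this hand-crafted step with learned embeddings.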