🤖 AI Summary
This paper addresses humour and sarcasm detection in Hindi-English code-mixed text. We investigate three strategies: (1) augmenting code-mixed training data with monolingual (native) samples to increase lexical and syntactic diversity; (2) a multi-task learning (MTL) framework that jointly models humour/sarcasm detection and a semantically related task, hate detection, on top of multilingual language models (MLMs); and (3) few-shot prompting of very large multilingual language models (VMLMs). Experimental results show that native sample mixing improves humour F1 by up to 6.76% and sarcasm F1 by up to 8.64%, that MTL yields the largest gains (up to 10.67% humour F1 and 12.35% sarcasm F1), and that few-shot prompting of VMLMs does not outperform the other approaches. Code is released for reproducibility.
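The MTL framework summarized above combines the main-task loss (humour/sarcasm) with an auxiliary-task loss (hate detection). A minimal sketch of such a weighted joint objective in plain Python is shown below; the function names, per-task weighting, and binary cross-entropy formulation are illustrative assumptions, not the paper's exact setup.

```python
import math

def cross_entropy(p, y):
    # p: predicted probability of the positive class, y: gold label (0 or 1)
    eps = 1e-9  # numerical guard against log(0)
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def joint_mtl_loss(main_batch, aux_batch, aux_weight=0.5):
    """Weighted sum of the main-task (humour/sarcasm) loss and the
    auxiliary-task (hate detection) loss, as in a hard-parameter-sharing
    MTL setup. `aux_weight` is a hypothetical knob for this sketch.
    Each batch is a list of (predicted_prob, gold_label) pairs."""
    main = sum(cross_entropy(p, y) for p, y in main_batch) / len(main_batch)
    aux = sum(cross_entropy(p, y) for p, y in aux_batch) / len(aux_batch)
    return main + aux_weight * aux
```

In practice both heads would sit on one shared encoder; the sketch only shows how the two task losses are blended into a single training signal.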
📝 Abstract
In this paper, we report our experiments with various strategies to improve code-mixed humour and sarcasm detection. All experiments target the Hindi-English code-mixed scenario, for which we have the required linguistic expertise. We experimented with three approaches, namely (i) native sample mixing, (ii) multi-task learning (MTL), and (iii) prompting very large multilingual language models (VMLMs). In native sample mixing, we added monolingual task samples to the code-mixed training sets. In MTL, we relied on native and code-mixed samples of a semantically related task (hate detection in our case). Finally, in the third approach, we evaluated the efficacy of VMLMs via few-shot in-context prompting. Our key findings are: (i) adding native samples improved both humour (raising the F1-score by up to 6.76%) and sarcasm (raising the F1-score by up to 8.64%) detection; (ii) training MLMs in an MTL framework boosted performance for both humour (F1-score gain of up to 10.67%) and sarcasm (F1-score gain of up to 12.35%) detection; and (iii) prompting VMLMs could not outperform the other two approaches. Finally, our ablation studies and error analysis revealed the cases where our models still fall short. We provide our code for reproducibility.
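The native sample mixing described above amounts to concatenating monolingual task samples into the code-mixed training set. A minimal sketch is given below; the function name, sampling ratio, and (text, label) tuple format are assumptions for illustration, not the paper's exact augmentation recipe.

```python
import random

def mix_native_samples(code_mixed, native_hi, native_en, ratio=0.5, seed=13):
    """Augment a code-mixed training set with monolingual (native) samples.

    ratio: number of native samples added per code-mixed sample (a
    hypothetical knob for this sketch). Samples are (text, label) pairs.
    """
    rng = random.Random(seed)
    n_extra = int(len(code_mixed) * ratio)
    pool = native_hi + native_en
    extra = rng.sample(pool, min(n_extra, len(pool)))
    mixed = code_mixed + extra
    rng.shuffle(mixed)  # avoid ordering the native samples at the end
    return mixed

# toy usage with invented examples: label 1 = humorous, 0 = not
cm = [("yeh joke funny hai", 1), ("aaj mausam theek hai", 0)]
hi = [("bahut hasya", 1)]
en = [("that was hilarious", 1), ("plain statement", 0)]
train = mix_native_samples(cm, hi, en, ratio=1.0)
```

With `ratio=1.0` the toy call adds two native samples to the two code-mixed ones, yielding a shuffled four-sample training set.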