🤖 AI Summary
Current large language models (LLMs) suffer from linguistic interference and humour attenuation in cross-lingual humour translation, resulting in low humorous appeal and poor cultural adaptability. To address this, we propose a psychology-inspired Humour Decomposition Mechanism (HDM), which integrates classical humour theories—such as incongruity, surprise, and relief—with chain-of-thought (CoT) reasoning to guide LLMs in stepwise identification, deconstruction, and reconstruction of humour elements. HDM requires no additional training and is plug-and-play compatible with mainstream LLMs. Automatic evaluation on open-source humour datasets shows significant improvements over strong baselines: +7.75% in humour preservation, +2.81% in fluency, and +6.13% in semantic consistency. This work constitutes the first systematic integration of cognitive humour theory into LLM-based machine translation frameworks, establishing a novel paradigm for culture-sensitive translation.
📝 Abstract
Humour translation plays a vital role as a bridge between cultures, fostering understanding and communication. Although most existing Large Language Models (LLMs) are capable of general translation tasks, they still struggle with humour translation, which is especially reflected in linguistic interference and a lack of humour in the translated text. In this paper, we propose a psychology-inspired Humour Decomposition Mechanism (HDM) that utilises Chain-of-Thought (CoT) prompting to imitate the human thought process, guiding LLMs to optimise the readability of translated humorous texts. Moreover, we integrate humour theory into HDM to further enhance the humorous elements of the translated text. Our automatic evaluation experiments on open-source humour datasets demonstrate that our method significantly improves the quality of humour translation, yielding average gains of 7.75% in humour, 2.81% in fluency, and 6.13% in coherence of the generated text.
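Since HDM is described as a training-free, plug-and-play prompting mechanism, the stepwise decomposition can be illustrated as a prompt template. The sketch below is not the authors' code; the prompt wording and the `build_hdm_prompt` helper are hypothetical, showing only how the three stages named above (identification, deconstruction, reconstruction) might be laid out as CoT steps for any mainstream LLM.

```python
# Illustrative sketch only: an HDM-style chain-of-thought prompt that
# decomposes humour translation into the three stages the abstract
# describes. All wording here is hypothetical, not the published method.

HDM_STEPS = [
    # Stage 1: identify the humour element, drawing on classical
    # humour theories (incongruity, surprise, relief).
    "Identify the humorous element in the source text and name the "
    "humour theory it relies on (incongruity, surprise, or relief).",
    # Stage 2: deconstruct the mechanics of the joke.
    "Deconstruct the element: explain the setup, the punchline, and any "
    "culture-specific references or wordplay it depends on.",
    # Stage 3: reconstruct the humour in the target language.
    "Reconstruct the humour in the target language, adapting wordplay "
    "and cultural references so the comic effect is preserved.",
]

def build_hdm_prompt(source_text: str, target_lang: str) -> str:
    """Assemble a stepwise CoT prompt for humour translation."""
    steps = "\n".join(f"Step {i}: {s}" for i, s in enumerate(HDM_STEPS, 1))
    return (
        f"Translate the following humorous text into {target_lang}, "
        f"reasoning step by step.\n\n{steps}\n\n"
        f"Source: {source_text}\n"
        "Write your reasoning for each step, then give the final translation."
    )

if __name__ == "__main__":
    print(build_hdm_prompt("Why did the chicken cross the road?", "French"))
```

Because the mechanism lives entirely in the prompt, it needs no fine-tuning and can be prepended to any instruction-following LLM's input, which is consistent with the plug-and-play claim above.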