HUMORCHAIN: Theory-Guided Multi-Stage Reasoning for Interpretable Multimodal Humor Generation

📅 2025-11-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the lack of cognitive grounding and interpretability in multimodal humor generation by proposing the first framework that explicitly models classical humor theories—such as incongruity-resolution—as a multi-stage reasoning chain. Methodologically, it integrates visual-semantic parsing, a stepwise humor reasoning module grounded in cognitive psychology, and a fine-tuned humor discriminator to construct an end-to-end interpretable generation system. The core contribution lies in concretizing abstract humor mechanisms into controllable, traceable reasoning paths, thereby achieving cognitive alignment between visual understanding and humorous expression. Evaluated on multiple benchmarks, the approach significantly outperforms state-of-the-art methods; human evaluation shows a 12.7% improvement in humor preference (Elo/BT scores), alongside enhanced semantic diversity and cognitive depth.

📝 Abstract
Humor, as both a creative human activity and a social binding mechanism, has long posed a major challenge for AI generation. Although producing humor requires complex cognitive reasoning and social understanding, theories of humor suggest that it follows learnable patterns and structures, making it theoretically possible for generative models to acquire them implicitly. In recent years, multimodal humor has become a prevalent form of online communication, especially among Gen Z, highlighting the need for AI systems capable of integrating visual understanding with humorous language generation. However, existing data-driven approaches lack explicit modeling or theoretical grounding of humor and fail to capture its underlying cognitive mechanisms, resulting in generated image descriptions that are fluent but lack genuine humor or cognitive depth. To address this limitation, we propose HUMORCHAIN (HUmor-guided Multi-step Orchestrated Reasoning Chain for Image Captioning), a theory-guided multi-stage reasoning framework. It integrates visual semantic parsing, humor- and psychology-based reasoning, and a fine-tuned discriminator for humor evaluation, forming an interpretable and controllable cognitive reasoning chain. To the best of our knowledge, this is the first work to explicitly embed cognitive structures from humor theories into multimodal humor generation, enabling a structured reasoning process from visual understanding to humor creation. Experiments on the Meme-Image-No-Text, Oogiri-GO, and OxfordTVG-HIC datasets show that HUMORCHAIN outperforms state-of-the-art baselines in human humor preference, Elo/BT scores, and semantic diversity, demonstrating that theory-driven structured reasoning enables large language models to generate humor aligned with human perception.
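The three-stage chain the abstract describes (visual-semantic parsing, theory-guided humor reasoning, discriminator-based selection) can be sketched roughly as follows. Note this is a minimal illustrative sketch, not the paper's implementation: all function names, the scene-graph format, and the scoring heuristic are placeholder assumptions standing in for the real vision model, LLM reasoning module, and fine-tuned discriminator.

```python
from dataclasses import dataclass

@dataclass
class SceneGraph:
    objects: list
    relations: list  # (subject, relation, object) triples

def parse_visual_semantics(image_id: str) -> SceneGraph:
    # Stage 1 (placeholder): a vision model would extract this from the image.
    return SceneGraph(objects=["cat", "laptop"],
                      relations=[("cat", "sits_on", "laptop")])

def reason_incongruity(scene: SceneGraph) -> list:
    # Stage 2 (placeholder): incongruity-resolution reasoning -- build an
    # expectation from the scene, then violate and resolve it. The real
    # system uses an LLM guided by humor-theory prompts.
    candidates = []
    for subj, rel, obj in scene.relations:
        setup = f"The {subj} looks like it is hard at work on the {obj}"
        candidates.append(f"{setup} -- clearly it is just warming its paws.")
    return candidates

def discriminator_score(caption: str) -> float:
    # Stage 3 (placeholder): stands in for the fine-tuned humor
    # discriminator; here a trivial length heuristic.
    return len(caption.split()) / 100.0

def humorchain(image_id: str) -> str:
    # End-to-end chain: parse -> reason -> rank candidates.
    scene = parse_visual_semantics(image_id)
    candidates = reason_incongruity(scene)
    return max(candidates, key=discriminator_score)
```

The interpretability claim maps onto this structure: each stage emits an inspectable intermediate (scene graph, candidate setups/punchlines, discriminator scores), so the reasoning path from image to joke is traceable.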
Problem

Research questions and friction points this paper is trying to address.

Multimodal humor generation requires integrating visual understanding with humorous language
Existing data-driven methods lack explicit modeling or theoretical grounding of humor, yielding literal but unfunny captions
Abstract humor mechanisms are not interpretable or controllable in current generation pipelines
Innovation

Methods, ideas, or system contributions that make the work stand out.

Theory-guided multi-stage reasoning chain for humor generation
Integrates visual-semantic parsing with humor-psychology-based reasoning and a fine-tuned humor discriminator
Concretizes abstract humor theories (e.g., incongruity-resolution) into controllable, traceable reasoning paths