Bridging the Creativity Understanding Gap: Small-Scale Human Alignment Enables Expert-Level Humor Ranking in LLMs

📅 2025-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) significantly underperform humans in understanding creative content such as humor. Method: We propose a decoupled framework that decomposes humor comprehension into three sequential stages (visual understanding, reasoning-based explanation generation, and human preference alignment), supported by high-quality human annotations, LLM-generated explanations, small-scale crowd-sourced preference data, and multi-stage supervised fine-tuning. Contribution/Results: Our approach establishes the first preference-driven alignment paradigm for creative judgment. Empirically, fine-tuning with only ~100 human preference judgments achieves 82.4% accuracy on humor caption ranking, surpassing the prior state of the art by 15.4 percentage points and matching top human experts. Crucially, we show that persona-based prompting is largely ineffective, underscoring the indispensable role of preference data in alignment. The work advocates systematic curation of diverse, human-elicited creative preference datasets to advance AGI capabilities in creative domains.
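To make the headline metric concrete, the sketch below shows one way a pairwise caption-ranking accuracy like the 82.4% figure could be computed. It is a minimal illustration in plain Python, not the authors' code: the judgment field names and the scoring callable are assumptions standing in for whatever model-derived funniness score the fine-tuned LLM produces.

```python
from typing import Callable, Dict, List

# Hypothetical record format for one crowd preference judgment:
# which of two captions for the same cartoon was preferred.
Judgment = Dict[str, str]  # keys: "cartoon", "caption_a", "caption_b", "preferred" ("a" or "b")


def pairwise_ranking_accuracy(
    judgments: List[Judgment],
    score_caption: Callable[[str, str], float],
) -> float:
    """Fraction of human preference pairs where the model's scores agree.

    `score_caption(cartoon_description, caption)` is a stand-in for any
    model-derived funniness score (e.g. from a fine-tuned LLM).
    """
    if not judgments:
        return 0.0
    correct = 0
    for j in judgments:
        score_a = score_caption(j["cartoon"], j["caption_a"])
        score_b = score_caption(j["cartoon"], j["caption_b"])
        model_pick = "a" if score_a >= score_b else "b"
        correct += int(model_pick == j["preferred"])
    return correct / len(judgments)


# Toy usage with a dummy scorer that simply prefers shorter captions.
if __name__ == "__main__":
    data = [
        {"cartoon": "Two fish sit in an office", "caption_a": "Short quip.",
         "caption_b": "A much longer, over-explained caption.", "preferred": "a"},
    ]
    print(pairwise_ranking_accuracy(data, lambda cartoon, cap: -len(cap)))
```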

📝 Abstract
Large Language Models (LLMs) have shown significant limitations in understanding creative content, as demonstrated by the influential work of Hessel et al. (2023) on the New Yorker Cartoon Caption Contest (NYCCC). Their study exposed a substantial gap between LLMs and humans in humor comprehension, establishing that understanding and evaluating creative content is a key challenge in AI development. We revisit this challenge by decomposing humor understanding into three components and systematically improving each: enhancing visual understanding through improved annotation, utilizing LLM-generated humor reasoning and explanations, and implementing targeted alignment with human preference data. Our refined approach achieves 82.4% accuracy in caption ranking, significantly improving upon the previous 67% benchmark and matching the performance of world-renowned human experts in this domain. Notably, while attempts to mimic subgroup preferences through various persona prompts showed minimal impact, model fine-tuning with crowd preferences proved remarkably effective. These findings reveal that LLM limitations in creative judgment can be effectively addressed through focused alignment to specific subgroups and individuals. Lastly, we propose the position that achieving artificial general intelligence necessitates systematic collection of human preference data across creative domains. We advocate that just as human creativity is deeply influenced by individual and cultural preferences, training LLMs with diverse human preference data may be essential for developing true creative understanding.
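The alignment stage the abstract contrasts with persona prompting can be pictured as turning each crowd preference judgment into a supervised fine-tuning example. The sketch below is an assumed data-preparation step, not the released pipeline: the field names, prompt wording, and chat-JSONL format are illustrative choices.

```python
import json
from typing import Dict, List


def preference_to_sft_example(j: Dict[str, str]) -> Dict[str, List[Dict[str, str]]]:
    """Convert one crowd preference judgment into a chat-style SFT example.

    The prompt presents the cartoon description and both candidate captions;
    the target answer names the caption the human crowd preferred.
    Field names ("cartoon", "caption_a", ...) are illustrative assumptions.
    """
    prompt = (
        f"Cartoon: {j['cartoon']}\n"
        f"Caption A: {j['caption_a']}\n"
        f"Caption B: {j['caption_b']}\n"
        "Which caption is funnier? Answer with A or B."
    )
    answer = "A" if j["preferred"] == "a" else "B"
    return {"messages": [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": answer},
    ]}


def write_sft_jsonl(judgments: List[Dict[str, str]], path: str) -> None:
    """Serialize all judgments as JSONL, one fine-tuning example per line."""
    with open(path, "w", encoding="utf-8") as f:
        for j in judgments:
            f.write(json.dumps(preference_to_sft_example(j)) + "\n")
```

With roughly a hundred such examples, as reported in the paper, the resulting file would be small; the reported gains come from the human preference signal rather than data volume.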
Problem

Research questions and friction points this paper is trying to address.

Improving LLMs' humor comprehension
Aligning LLMs with human preferences
Enhancing creative content understanding in AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enhanced visual understanding annotation
LLM-generated humor reasoning utilization
Targeted human preference alignment