🤖 AI Summary
It remains unclear whether large language models' (LLMs') creative capabilities improve across model generations, and how stable those capabilities are across and within models.
Method: We systematically evaluate 14 state-of-the-art LLMs on two established creativity benchmarks—the Divergent Association Task (DAT) and the Alternative Uses Task (AUT)—using multi-round sampling, rigorous statistical testing, and cross-model comparison.
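For context, the DAT metric (Olson et al., 2021) scores a set of unrelated nouns by the average pairwise cosine distance between their word embeddings, scaled to 0-100 (higher = more semantically divergent). The sketch below illustrates that scoring idea; the `embeddings` lookup is a placeholder, and the paper's exact scoring pipeline may differ.

```python
# Minimal sketch of the standard DAT score: mean pairwise cosine distance
# between word embeddings, scaled x100. The `embeddings` dict is an
# assumption for illustration (e.g., loaded from pretrained GloVe vectors);
# this is not necessarily the authors' exact implementation.
from itertools import combinations
import numpy as np

def dat_score(words: list[str], embeddings: dict[str, np.ndarray]) -> float:
    """Average pairwise cosine distance among the given words, times 100."""
    vecs = [embeddings[w.lower()] for w in words if w.lower() in embeddings]
    if len(vecs) < 2:
        raise ValueError("Need at least two in-vocabulary words to score.")
    dists = []
    for a, b in combinations(vecs, 2):
        cos_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        dists.append(1.0 - cos_sim)  # cosine distance
    return 100.0 * float(np.mean(dists))
```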
Contribution/Results: Contrary to expectations, LLM creativity does not significantly increase with model generation; GPT-4 even shows a decline. Only 0.28% of model outputs reach the top 10% of human originality scores. While all models surpass the human mean score, intra-model output variability is extremely high, revealing severe inconsistency. This work is the first to quantitatively expose the "high-mean, low-consistency" paradox in LLM creativity, empirically demonstrating that careful prompt design and repeated evaluation are essential for reliable assessment.
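The repeated-evaluation point can be made concrete with multi-round sampling: query the same model with the same prompt many times and report the spread of scores, not just the mean. A minimal sketch follows; `query_model` and `score_response` are hypothetical placeholders, not the paper's actual harness.

```python
# Hedged sketch of multi-round sampling to quantify intra-model variability.
# `query_model` and `score_response` are assumed callables for illustration.
import statistics

def sample_scores(query_model, score_response, prompt: str, n_rounds: int = 30) -> dict:
    """Run the same prompt n_rounds times and summarize the score spread."""
    scores = [score_response(query_model(prompt)) for _ in range(n_rounds)]
    return {
        "mean": statistics.mean(scores),
        "stdev": statistics.stdev(scores),  # high stdev = inconsistent output
        "min": min(scores),
        "max": max(scores),
    }
```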
📝 Abstract
Following the widespread adoption of ChatGPT in early 2023, numerous studies reported that large language models (LLMs) can match or even surpass human performance in creative tasks. However, it remains unclear whether LLMs have become more creative over time, and how consistent their creative output is. In this study, we evaluated 14 widely used LLMs -- including GPT-4, Claude, Llama, Grok, Mistral, and DeepSeek -- across two validated creativity assessments: the Divergent Association Task (DAT) and the Alternative Uses Task (AUT). Contrary to expectations, we found no evidence of increased creative performance over the past 18-24 months, with GPT-4 performing worse than in previous studies. For the more widely used AUT, all models performed on average better than the average human, with GPT-4o and o3-mini performing best. However, only 0.28% of LLM-generated responses reached the top 10% of human creativity benchmarks. Beyond inter-model differences, we document substantial intra-model variability: the same LLM, given the same prompt, can produce outputs ranging from below the human average to highly original. This variability has important implications for both creativity research and practical applications. Ignoring such variability risks misjudging the creative potential of LLMs, either inflating or underestimating their capabilities. Moreover, the choice of prompt affected models differently. Our findings underscore the need for more nuanced evaluation frameworks and highlight the importance of model selection, prompt design, and repeated assessment when using Generative AI (GenAI) tools in creative contexts.