Divergent-Convergent Thinking in Large Language Models for Creative Problem Generation

📅 2025-12-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) suffer from homogenization, the "Artificial Hivemind" effect, when generating educational questions, which limits creativity and diversity. Method: We propose CreativeDC, a two-stage prompting framework that decouples divergent exploration from convergent constraint satisfaction. It is the first to formally integrate Wallas's four-stage creativity model with Guilford's divergent-convergent thinking framework, yielding an operational, structured chain-of-thought (CoT) prompting paradigm. We further design a quantitative diversity metric and a large-scale sampling-efficacy analysis. Contribution/Results: Experiments show CreativeDC significantly improves question diversity and novelty over baselines (+28.6%) while maintaining high practicality. Moreover, as the sampling scale increases, the rate of effective question generation rises by 41.3%, demonstrating scalable creativity enhancement.
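The paper's own diversity metric is not reproduced here. As one illustrative way to quantify an "effective number of distinct problems" that shrinks as outputs homogenize, the sketch below uses the exponentiated Shannon entropy of duplicate clusters; the function name and the whitespace-based canonicalization are assumptions, not the authors' metric.

```python
import math
from collections import Counter

def effective_distinct(problems, canon=lambda s: " ".join(s.lower().split())):
    """Exponentiated Shannon entropy of the duplicate-cluster distribution.

    Equals the raw count when every problem is unique, and collapses toward 1
    as generations become copies of each other (the hive-mind failure mode).
    `canon` normalizes surface variation before grouping; any semantic
    similarity function could replace it.
    """
    counts = Counter(canon(p) for p in problems)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    return math.exp(entropy)
```

For example, three unique problems score 3.0, while four problems that are two duplicated pairs score 2.0, matching the intuition of an "effective" count.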

📝 Abstract
Large language models (LLMs) have significant potential for generating educational questions and problems, enabling educators to create large-scale learning materials. However, LLMs are fundamentally limited by the "Artificial Hivemind" effect, where they generate similar responses within the same model and produce homogeneous outputs across different models. As a consequence, students may be exposed to overly similar and repetitive LLM-generated problems, which harms diversity of thought. Drawing inspiration from Wallas's theory of creativity and Guilford's framework of divergent-convergent thinking, we propose CreativeDC, a two-phase prompting method that explicitly scaffolds the LLM's reasoning into distinct phases. By decoupling creative exploration from constraint satisfaction, our method enables LLMs to explore a broader space of ideas before committing to a final problem. We evaluate CreativeDC for creative problem generation using a comprehensive set of metrics that capture diversity, novelty, and utility. The results show that CreativeDC achieves significantly higher diversity and novelty compared to baselines while maintaining high utility. Moreover, scaling analysis shows that CreativeDC generates a larger effective number of distinct problems as more are sampled, increasing at a faster rate than baseline methods.
Problem

Research questions and friction points this paper is trying to address.

Addresses the Artificial Hivemind effect limiting diversity in LLM-generated educational problems
Proposes a two-phase prompting method to separate creative exploration from constraint satisfaction
Enhances diversity and novelty of generated problems while maintaining their utility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-phase prompting method decouples creative exploration from constraint satisfaction
Divergent-convergent thinking scaffolds LLM reasoning into distinct phases
Generates higher diversity and novelty while maintaining utility
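The two-phase scaffold above can be sketched as a thin wrapper around any text-completion call: a divergent prompt brainstorms unconstrained candidates, then a convergent prompt selects and refines one under the task constraints. The prompt wording, function, and parameter names below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of divergent-convergent prompting (CreativeDC-style).
# `llm` is any callable mapping a prompt string to a completion string.

DIVERGE_PROMPT = (
    "Brainstorm {n} distinct, unconventional ideas for a {topic} practice "
    "problem. Do not worry about feasibility yet. One idea per line."
)

CONVERGE_PROMPT = (
    "From the candidate ideas below, pick the most novel one that still "
    "satisfies these constraints: {constraints}. Write it out as a complete, "
    "well-posed problem.\n\nIdeas:\n{ideas}"
)

def creative_dc(llm, topic, constraints, n_ideas=8):
    """Phase 1 diverges (unconstrained exploration); phase 2 converges
    (constraint satisfaction) on the ideas produced in phase 1."""
    ideas = llm(DIVERGE_PROMPT.format(n=n_ideas, topic=topic))
    return llm(CONVERGE_PROMPT.format(constraints=constraints, ideas=ideas))
```

Keeping the two prompts as separate calls, rather than one combined instruction, is the key design choice: the model commits to a pool of ideas before any constraint can prune them.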