Training Emergent Joint Associations: A Reinforcement Learning Approach to Creative Thinking in Language Models

📅 2025-11-21
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study addresses the limitation of language models in associative thinking—i.e., cross-conceptual reasoning—by enhancing their creative capabilities. We propose a reinforcement learning–based fine-tuning framework that, for the first time, incorporates divergent thinking metrics—novelty and conceptual connectivity—into the reward function. To enable scalable evaluation, we design a prompt-driven, unsupervised assessment mechanism that encourages models to autonomously construct deep, cross-domain associations. The method operates entirely without human annotation, relying solely on self-supervised prompt generation to derive reward signals. Experiments across diverse generative tasks—including story writing, code generation, and diagram synthesis—demonstrate substantial improvements in originality, logical coherence, and abstract cross-task transfer. Our approach consistently outperforms baseline models on multiple creativity-oriented metrics. The core contribution is a learnable associative reasoning mechanism that jointly enhances creative generation and abstract inference.
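The summary describes a reward that jointly scores novelty and conceptual connectivity. The paper's actual formulation is not given here, so the following is a minimal stdlib-only sketch of that idea under stated assumptions: `novelty` is approximated by the fraction of output tokens absent from a reference, `connectivity` by whether the output touches both seed concepts, and `alpha` is a hypothetical blending weight.

```python
# Hypothetical sketch of a novelty + connectivity reward (all names,
# proxies, and the alpha weight are assumptions, not the authors' code).

def novelty(output_tokens, reference_tokens):
    """Fraction of output tokens not present in the reference (crude novelty proxy)."""
    ref = set(reference_tokens)
    new = [t for t in output_tokens if t not in ref]
    return len(new) / max(len(output_tokens), 1)

def connectivity(output_tokens, concept_a, concept_b):
    """Partial credit for each of the two seed concepts the output mentions."""
    hits = sum(1 for c in (concept_a, concept_b) if c in output_tokens)
    return hits / 2

def reward(output_tokens, reference_tokens, concept_a, concept_b, alpha=0.5):
    """Weighted blend of the two divergent-thinking scores; alpha is a tuning assumption."""
    return (alpha * novelty(output_tokens, reference_tokens)
            + (1 - alpha) * connectivity(output_tokens, concept_a, concept_b))
```

In an RL fine-tuning loop (e.g. PPO-style), this scalar would be attached to each sampled generation; the real system presumably replaces these token-overlap proxies with learned or embedding-based scorers.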

📝 Abstract
Associative thinking--the ability to connect seemingly unrelated ideas--is a foundational element of human creativity and problem-solving. This paper explores whether reinforcement learning (RL) guided by associative thinking principles can enhance a model's performance across diverse generative tasks, including story writing, code generation, and chart creation. We introduce a reinforcement learning framework that uses a prompt-based evaluation mechanism, incorporating established divergent thinking metrics from creativity research. A base language model is fine-tuned under this framework to reward outputs that achieve greater novelty through stronger conceptual connectivity. The experimental results suggest that models trained with this associative-thinking RL objective not only generate more original and coherent stories but also exhibit improved abstraction and flexibility in tasks such as programming and data visualization. Our findings provide initial evidence that modeling cognitive creativity principles through reinforcement learning can yield more adaptive and generative AI.
Problem

Research questions and friction points this paper is trying to address.

Enhancing language model creativity through reinforcement learning guided by associative thinking principles
Improving generative task performance in story writing, code generation, and chart creation
Developing AI models with greater novelty, abstraction, and conceptual connectivity in outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning framework with prompt-based evaluation
Fine-tuning language models for conceptual connectivity
Applying divergent thinking metrics from creativity research
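The innovations above hinge on a prompt-based, annotation-free evaluation step. As a rough illustration only (the evaluator prompt wording, 0-10 scale, and parsing are all assumptions, since the paper's actual prompts are not reproduced here), such a loop might have a model judge its own outputs and turn the parsed scores into the RL reward:

```python
import re

# Hypothetical evaluator prompt; the authors' actual wording is unknown.
EVAL_PROMPT = (
    "Rate the following text from 0 to 10 for (a) novelty and "
    "(b) how well it connects the concepts '{a}' and '{b}'.\n"
    "Answer as 'novelty=<n> connectivity=<n>'.\n\nText: {text}"
)

def parse_scores(model_reply):
    """Extract the two 0-10 scores and average them, normalised to [0, 1]."""
    m = re.search(r"novelty=(\d+)\s+connectivity=(\d+)", model_reply)
    if not m:
        return 0.0  # unparseable judge replies yield zero reward
    nov, con = int(m.group(1)) / 10, int(m.group(2)) / 10
    return (nov + con) / 2

def evaluate(generate_fn, judge_fn, concept_a, concept_b):
    """One annotation-free step: generate, self-judge via prompt, return reward."""
    text = generate_fn(concept_a, concept_b)
    reply = judge_fn(EVAL_PROMPT.format(a=concept_a, b=concept_b, text=text))
    return parse_scores(reply)
```

Here `generate_fn` and `judge_fn` stand in for calls to the fine-tuned model and an evaluator model; keeping them as plain callables makes the loop testable without any model backend.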