CreativeBench: Benchmarking and Enhancing Machine Creativity via Self-Evolving Challenges

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of rigorous, quantitative benchmarks for evaluating machine creativity, which hinders the development of self-evolving generative systems. The authors propose CreativeBench, an evaluation framework for assessing machine creativity in code generation, grounded in cognitive theory and featuring two task types: combinatorial and exploratory. Assessment is automated through executable code validation, reverse engineering, and self-play, with a unified metric defined as the product of quality and novelty. The study reveals a nonlinear relationship between model scale and creativity type: scaling substantially enhances combinatorial creativity but yields diminishing returns for exploratory capacity, and larger models become more correct yet less divergent, a phenomenon the authors term "convergence-by-scaling." Finally, the authors introduce EvoRePE, a plug-and-play inference-time steering strategy that internalizes evolutionary search patterns and consistently improves creative performance across diverse models.
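
The product-of-quality-and-novelty metric is concrete enough to sketch in code. The minimal Python illustration below is hypothetical: `quality` as the fraction of executable tests passed and `novelty` as textual distance from reference solutions are stand-ins of our own, since the paper's actual scorers (built on reverse engineering and self-play) are not detailed here.

```python
# Minimal sketch of the unified metric (creativity = quality x novelty)
# described in the summary above. The concrete scorers (pass-rate quality,
# text-similarity novelty) are illustrative assumptions, not the authors'
# implementation.
import difflib

def quality(candidate_fn, test_cases) -> float:
    """Fraction of executable test cases the candidate passes."""
    passed = 0
    for args, expected in test_cases:
        try:
            if candidate_fn(*args) == expected:
                passed += 1
        except Exception:
            pass  # runtime errors count as failures, not as creativity
    return passed / len(test_cases)

def novelty(candidate_src: str, reference_srcs: list[str]) -> float:
    """One minus the highest textual similarity to any reference solution
    (a hypothetical proxy; the paper derives novelty through its
    reverse-engineering and self-play pipeline)."""
    if not reference_srcs:
        return 1.0
    best = max(difflib.SequenceMatcher(None, candidate_src, ref).ratio()
               for ref in reference_srcs)
    return 1.0 - best

def creativity(candidate_fn, candidate_src, test_cases, reference_srcs) -> float:
    # The multiplicative form is what separates creativity from
    # hallucination: novel-looking but incorrect code scores near zero,
    # and so does correct but derivative code.
    return quality(candidate_fn, test_cases) * novelty(candidate_src, reference_srcs)
```

Under this reading, a program scores high only when it is both correct and different from known solutions, which is how the benchmark distinguishes creativity from hallucination.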

📝 Abstract
The saturation of high-quality pre-training data has shifted research focus toward evolutionary systems capable of continuously generating novel artifacts, leading to the success of AlphaEvolve. However, the progress of such systems is hindered by the lack of rigorous, quantitative evaluation. To tackle this challenge, we introduce CreativeBench, a benchmark for evaluating machine creativity in code generation, grounded in a classical cognitive framework. Comprising two subsets, CreativeBench-Combo and CreativeBench-Explore, the benchmark targets combinatorial and exploratory creativity through an automated pipeline utilizing reverse engineering and self-play. By leveraging executable code, CreativeBench objectively distinguishes creativity from hallucination via a unified metric defined as the product of quality and novelty. Our analysis of state-of-the-art models reveals distinct behaviors: (1) scaling significantly improves combinatorial creativity but yields diminishing returns for exploration; (2) larger models exhibit "convergence-by-scaling," becoming more correct but less divergent; and (3) reasoning capabilities primarily benefit constrained exploration rather than combination. Finally, we propose EvoRePE, a plug-and-play inference-time steering strategy that internalizes evolutionary search patterns to consistently enhance machine creativity.
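
The abstract characterizes EvoRePE only as a plug-and-play inference-time steering strategy that internalizes evolutionary search patterns; no prompt or algorithm is given here. The sketch below is a hypothetical reading of that description, where both the preamble wording and the `generate` interface are our assumptions: a model-agnostic wrapper that steers any model toward propose-mutate-select behavior at inference time.

```python
# Hypothetical sketch of inference-time steering in the spirit of EvoRePE.
# Only the high-level description (plug-and-play, internalizes evolutionary
# search patterns) comes from the paper; the preamble text and the
# `generate` callable are assumptions made for illustration.

EVOLUTIONARY_PREAMBLE = """\
Before answering, run a short evolutionary search in your reasoning:
1. Propose two structurally different draft solutions.
2. Mutate the stronger draft: recombine ideas or swap the algorithmic core.
3. Select and output only the final program, favoring novelty that still
   satisfies the stated requirements.
"""

def steered_generate(generate, task_prompt: str) -> str:
    """Wrap any text-generation callable with the steering preamble;
    plug-and-play in the sense that no fine-tuning is required."""
    return generate(EVOLUTIONARY_PREAMBLE + "\nTask:\n" + task_prompt)
```

If EvoRePE works roughly this way, calling `steered_generate(my_llm, ...)` would apply the same steering to any backend, which would fit the paper's claim of consistent gains across diverse models.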
Problem

Research questions and friction points this paper is trying to address.

machine creativity
evaluation benchmark
evolutionary systems
code generation
quantitative assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

machine creativity
code generation
evolutionary systems
creativity benchmark
inference-time steering