MacGyver: Are Large Language Models Creative Problem Solvers?

📅 2023-11-16
🏛️ North American Chapter of the Association for Computational Linguistics
📈 Citations: 7
Influential: 0
🤖 AI Summary
This study investigates the creative problem-solving capabilities of large language models (LLMs) under physical constraints. To this end, we introduce MACGYVER—the first benchmark dataset for unconventional object use, comprising 1,600+ realistic physical-scenario problems—and conduct the first systematic evaluation of LLMs’ limitations in physical feasibility reasoning and divergent (“out-of-the-box”) thinking, complemented by human–AI comparative assessment. We propose a “divergent–convergent” two-stage prompting framework with iterative reflection, integrating automated data generation, error attribution analysis, and structured chain-of-thought optimization. Results show that while LLMs excel at cross-domain knowledge transfer, they frequently produce physically infeasible solutions; our method improves solution feasibility by +28.3% and innovation by +22.1%. In contrast, humans exhibit strong familiarity bias and limited generalization. This work establishes a novel paradigm and empirical foundation for evaluating and enhancing embodied creativity in LLMs.
📝 Abstract
We explore the creative problem-solving capabilities of modern LLMs in a novel constrained setting. To this end, we create MACGYVER, an automatically generated dataset consisting of over 1,600 real-world problems deliberately designed to trigger innovative usage of objects and necessitate out-of-the-box thinking. We then present our collection to both LLMs and humans to compare and contrast their problem-solving abilities. MACGYVER is challenging for both groups, but in unique and complementary ways. For instance, humans excel in tasks they are familiar with but struggle with domain-specific knowledge, leading to a higher variance. In contrast, LLMs, exposed to a variety of specialized knowledge, attempt broader problems but fail by proposing physically infeasible actions. Finally, we provide a detailed error analysis of LLMs, and demonstrate the potential of enhancing their problem-solving ability with novel prompting techniques such as iterative step-wise reflection and divergent-convergent thinking. This work (1) introduces a fresh arena for intelligent agents focusing on intricate aspects of physical reasoning, planning, and unconventional thinking, which supplements the existing spectrum of machine intelligence; and (2) provides insight into the constrained problem-solving capabilities of both humans and AI.
Problem

Research questions and friction points this paper is trying to address.

Exploring LLMs' creative problem-solving in constrained settings
Comparing human and AI problem-solving abilities using MACGYVER dataset
Enhancing LLMs' problem-solving with novel prompting techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated dataset generation
Iterative step-wise reflection
Divergent-convergent thinking
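The two prompting ideas above can be combined into a single loop: brainstorm broadly (divergent), distill one plan (convergent), then reflect on each step's physical feasibility and revise. The sketch below is a hypothetical illustration of that pipeline, not the authors' implementation; the prompt wordings and the `llm` callable (any text-in, text-out model interface) are assumptions.

```python
def divergent_convergent_solve(problem, llm, max_reflections=2):
    """Sketch of two-stage prompting with iterative step-wise reflection.

    `llm` is any callable mapping a prompt string to a response string.
    """
    # Stage 1 (divergent): brainstorm unconventional uses of available objects.
    ideas = llm(
        "List several unconventional ways to use the objects at hand "
        f"to solve this problem:\n{problem}"
    )

    # Stage 2 (convergent): distill the brainstorm into one concrete plan.
    plan = llm(
        f"Problem:\n{problem}\nCandidate ideas:\n{ideas}\n"
        "Pick the most physically feasible idea and write a numbered "
        "step-by-step plan."
    )

    # Iterative step-wise reflection: audit each step for physical
    # feasibility and revise until the critique reports no issues.
    for _ in range(max_reflections):
        critique = llm(
            "Check each step of this plan for physical feasibility. "
            "Reply 'OK' if every step is feasible; otherwise list the "
            f"infeasible steps:\n{plan}"
        )
        if critique.strip() == "OK":
            break
        plan = llm(f"Revise the plan to fix these issues:\n{critique}\nPlan:\n{plan}")
    return plan
```

With a real model behind `llm`, the reflection loop is what targets the failure mode the paper highlights: plans that read well but propose physically infeasible actions.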