No Evidence for LLMs Being Useful in Problem Reframing

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the practical utility of large language models (LLMs) in problem reframing—a higher-order creative design activity in which a stated problem is redefined through alternative conceptual lenses. Method: Using a human-AI co-creation experimental paradigm, the authors conducted a large-scale empirical evaluation with 280 designers, comparing three ways of using LLMs—free-form use, direct generation, and theory-informed structured prompting—while assessing outputs on multidimensional quality metrics (novelty, applicability, insightfulness) and measuring subjective agency via validated psychometric scales. Contribution/Results: Contrary to expectations, LLM assistance did not improve problem frame quality; instead, it widened the performance gap between experienced and inexperienced designers and significantly diminished inexperienced designers' sense of agency. No statistically significant quality gains were observed in any condition. This work provides the first systematic evidence of potential adverse effects of LLMs in advanced creative design tasks, challenging the "universal augmentation" hypothesis and offering boundary conditions for AI-augmented design practice.

📝 Abstract
Problem reframing is a designerly activity wherein alternative perspectives are created to recast what a stated design problem is about. Generating alternative problem frames is challenging because it requires devising novel and useful perspectives that fit the given problem context. Large language models (LLMs) could assist this activity via their generative capability. However, it is not clear whether they can help designers produce high-quality frames. Therefore, we asked if there are benefits to working with LLMs. To this end, we compared three ways of using LLMs (N=280): 1) free-form, 2) direct generation, and 3) a structured approach informed by a theory of reframing. We found that using LLMs does not help improve the quality of problem frames. In fact, it increases the competence gap between experienced and inexperienced designers. Also, inexperienced ones perceived lower agency when working with LLMs. We conclude that there is no benefit to using LLMs in problem reframing and discuss possible factors for this lack of effect.
Problem

Research questions and friction points this paper is trying to address.

Assessing whether LLMs are effective at generating high-quality problem frames.
Comparing three methods of using LLMs for problem reframing tasks.
Evaluating the impact of LLM assistance on designers' competence gap and sense of agency.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compared three LLM usage methods (free-form, direct generation, structured)
Structured prompting approach informed by a theory of reframing
Assessed LLM impact on frame quality, the expert-novice gap, and perceived agency