Beyond Productivity: Rethinking the Impact of Creativity Support Tools

πŸ“… 2025-05-02
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Current evaluations of Creative Support Tools (CSTs) overemphasize productivity and output quality while neglecting user-centered dimensions. Method: We conduct a systematic review of 173 CST evaluation studies, integrating bibliometric analysis with critical Human-Computer Interaction (HCI) methodology assessment to identify persistent evaluative biases. Contribution/Results: Our analysis reveals self-reflection and subjective well-being as critically undermeasured dimensions. We propose the first multidimensional evaluation framework specifically designed for generative AI–enhanced CSTs, shifting beyond traditional efficacy-oriented paradigms toward holistic, human-centered assessment. The framework explicitly incorporates affective, cognitive, and metacognitive dimensions and calls for the development of domain-specific validity metrics. This work establishes a theoretical foundation and practical roadmap for next-generation CST evaluation, advancing the design and deployment of human-centered AI creative tools.

Technology Category

Application Category

πŸ“ Abstract
Creativity Support Tools (CSTs) are widely used across diverse creative domains, with generative AI recently increasing the abilities of CSTs. To better understand how the success of CSTs is determined in the literature, we conducted a review of outcome measures used in CST evaluations. Drawing from (n=173) CST evaluations in the ACM Digital Library, we identified the metrics commonly employed to assess user interactions with CSTs. Our findings reveal prevailing trends in current evaluation practices, while exposing underexplored measures that could broaden the scope of future research. Based on these results, we argue for a more holistic approach to evaluating CSTs, encouraging the HCI community to consider not only user experience and the quality of the generated output, but also user-centric aspects such as self-reflection and well-being as critical dimensions of assessment. We also highlight a need for validated measures specifically suited to the evaluation of generative AI in CSTs.
Problem

Research questions and friction points this paper is trying to address.

Reviewing outcome measures in CST evaluations
Identifying gaps in current CST assessment practices
Advocating holistic evaluation including user well-being
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reviewing outcome measures in CST evaluations
Identifying underexplored metrics for future research
Advocating holistic evaluation including user well-being
πŸ”Ž Similar Papers
No similar papers found.
Samuel Rhys Cox
Samuel Rhys Cox
Aalborg University
Human-Computer InteractionConversational AgentsHuman-Centered AISocial Computing
H
Helena Bojer Djernaes
Aalborg University, Aalborg, Denmark
N
N. V. Berkel
Aalborg University, Aalborg, Denmark