AI Summary
This study addresses the blurring of authorship in large language model (LLM)-assisted writing, where collaborative text generation between users and models can obscure the user's sense of ownership. Through an experiment with 302 participants, combining interaction logs with repeated measures of self-efficacy and trust, the research reveals a prevalent dynamic: self-efficacy tends to decline while trust in the LLM increases over the course of collaboration. The study establishes a mechanistic link among users' psychological states, prompting strategies, and perceived authorship. Participants with stable self-efficacy reported higher actual and perceived authorship, whereas those experiencing efficacy loss were more likely to delegate direct editing to the LLM, while those who regained efficacy actively sought feedback. These findings provide empirical grounding for designing collaborative writing systems that support robust authorship construction.
Abstract
Large language models (LLMs) are increasingly used as collaborative partners in writing. However, this raises a critical challenge of authorship, as users and models jointly shape text across interaction turns. Understanding authorship in this context requires examining users' evolving internal states during collaboration, particularly self-efficacy and trust. Yet, the dynamics of these states and their associations with users' prompting strategies and authorship outcomes remain underexplored. We examined these dynamics through a study of 302 participants in LLM-assisted writing, capturing interaction logs and turn-by-turn self-efficacy and trust ratings. Our analysis showed that collaboration generally decreased users' self-efficacy while increasing trust. Participants who lost self-efficacy were more likely to ask the LLM to edit their work directly, whereas those who recovered self-efficacy requested more review and feedback. Furthermore, participants with stable self-efficacy showed higher actual and perceived authorship of the final text. Based on these findings, we propose design implications for understanding and supporting authorship in human-LLM collaboration.