🤖 AI Summary
This study investigates how knowledge workers attribute credit to AI in human–large language model (LLM) collaborative authorship. Method: Drawing on 155 structured survey responses, we integrate open coding with statistical analysis to examine credit attribution patterns. Contribution/Results: We provide the first empirical evidence of systematic underattribution of AI contributions: AI is assigned significantly less credit than a human partner for equivalent contributions, and 92% of respondents consider disclosure of AI involvement necessary. We identify five key attribution criteria (output quality, value alignment, technical literacy, initiative, and explainability), moving beyond a narrow “authorship” lens to propose a multidimensional model of AI credit attribution. These findings establish an empirical foundation for AI attribution norms and inform the design and governance of trustworthy human–AI collaboration frameworks.
📝 Abstract
AI systems powered by large language models can act as capable assistants for writing and editing. In these tasks, the AI system serves as a co-creative partner, making novel contributions to an artifact-under-creation alongside its human partner(s). One question that arises in these scenarios is the extent to which the AI should be credited for its contributions. We examined knowledge workers' views of attribution through a survey study (N=155) and found that they assigned different levels of credit across contribution types, amounts, and degrees of initiative. We observed a consistent pattern in which AI was assigned less credit than a human partner for equivalent contributions. Participants felt that disclosing AI involvement was important, and they used a variety of criteria to make attribution judgments, including the quality of contributions, personal values, and technology considerations. Our results motivate and inform new approaches for crediting AI contributions to co-created work.