🤖 AI Summary
This study investigates the motivations and mechanisms underlying authors' disclosure of AI use in human–AI collaborative writing. Method: Drawing on theories from human–computer interaction and behavioral science, we develop a four-dimensional framework of AI disclosure determinants, the first of its kind in the literature, encompassing motivation, context, social norms, and technological perception. Through theoretical integration, empirical induction, and factor clustering, we identify and categorize 12 core factors (e.g., perceived accountability attribution, audience expectations, platform policies, and perceived AI controllability). Contribution/Results: Moving beyond a narrow technical lens, the study proposes a systematic, actionable model of transparency impacts. It offers empirically grounded insights and theoretical foundations for designing differentiated strategies that promote AI disclosure and for ethical governance of collaborative authorship.
📝 Abstract
Have you ever read a blog or social media post and suspected that it was written, at least in part, by artificial intelligence (AI)? While transparently acknowledging contributors to writing is generally valued, why some writers choose to disclose or withhold AI involvement remains unclear. In this work, we ask what factors shape writers' decisions to disclose their AI use, as a starting point for effectively advocating transparency. To shed light on this question, we synthesize study findings and theoretical frameworks from human-AI interaction and behavioral science. Concretely, we identify and curate a list of factors that could affect writers' decisions regarding disclosure of human-AI co-created content.