🤖 AI Summary
This study addresses the blurring boundaries of authorship in programming education due to generative AI, where students lack clear norms for disclosing AI assistance. Through a factorial vignette experiment manipulating assessment type, AI autonomy, and student involvement, the research examines authorship attributions and disclosure preferences among 94 computer science students across 102 scenarios. Findings indicate that the degree of AI assistance and the extent of human refinement effort are key determinants of authorship judgments, and students' perceptions of authorship significantly predict their expectations regarding disclosure policies. Building on these insights, the paper proposes shifting from declarative disclosure policies to a process-oriented attribution framework, reconceptualizing AI disclosure as a pedagogical tool that fosters students' critical engagement with AI-generated content.
📝 Abstract
Generative AI blurs the lines of authorship in computing education, creating uncertainty around how students should attribute AI assistance. To examine these emerging norms, we conducted a factorial vignette study with 94 computer science students across 102 unique scenarios, systematically manipulating assessment type, AI autonomy, student activity, prior knowledge, and human refinement effort. This paper details how these factors influence students' perceptions of ownership and disclosure preferences. Our findings indicate that attribution judgments are primarily driven by differing levels of AI assistance and human refinement. We also found that students' perception of authorship significantly predicts their policy expectations. We conclude by proposing a shift from statement-style policies to process-oriented attribution, transforming disclosure into a pedagogical mechanism for fostering critical engagement with AI-generated content.