🤖 AI Summary
This study investigates how undergraduate software engineering students misuse large language models (LLMs) in situations they perceive as academic violations, and how such misuse relates to assessment design and instructional guidance. Drawing on a cross-national, cross-sectional survey of 116 students—combining questionnaires, quantitative statistics, and qualitative content analysis—the research shows that inappropriate LLM use occurs predominantly in programming assignments and routine coursework, whereas use during examinations is less frequent but more clearly recognized as misconduct. Although students generally acknowledge the consequences of such behavior, they perceive institutional penalties as limited. The findings indicate that time pressure and ambiguous instructional guidance are significantly associated with LLM misuse, underscoring the need to reconceptualize learning objectives and redesign assessments to address the academic integrity challenges posed by AI technologies.
📝 Abstract
Background: Cheating in university education is commonly described as context-dependent, shaped by assessment design, institutional norms, and student interpretation. In software engineering education, programming-oriented coursework has historically involved ambiguity around collaboration, reuse, and external assistance. Recently, large language models (LLMs) have added a further layer of mediation to the production of code and related artifacts. Aims: This study investigates how software engineering students describe experiences of using LLMs in ways they perceived as inappropriate, disallowed, or misaligned with course expectations. Method: A cross-sectional survey was conducted with 116 undergraduate software engineering students from multiple countries, combining quantitative summaries with qualitative data. Results: Reported LLM cheating practices occurred primarily in programming assignments, routine coursework, and documentation tasks, often under time pressure and unclear guidance. Use during quizzes and exams was less frequent and more consistently identified as a violation. Students reported awareness of the academic and professional consequences of LLM cheating, while perceiving formal sanctions as limited. Conclusions: Our study indicates that reported LLM misuse in software engineering is associated with assessment and instructional conditions, suggesting a need for clearer alignment between assessment design, learning objectives, and expectations for LLM use.