🤖 AI Summary
This study investigates the tension between students’ overreliance on large language models (LLMs) and instructor-imposed pedagogical norms in computer science education. Through qualitative user research integrating contextual analysis and intent modeling, the authors identify seven key user intents across five representative usage scenarios and evaluate their alignment with established teaching guidelines. Findings reveal significant normative conflicts in contexts such as writing generation and programming assessments, whereas higher consensus exists in revision, error correction, and information retrieval tasks. Notably, instructors are increasingly incorporating LLM usage logs into grading rubrics, signaling a shift in policy from outright prohibition toward integrated assessment. Building on these insights, the paper proposes new interaction design principles for LLMs tailored to educational settings, offering practical guidance for the responsible integration of generative AI into teaching and learning practices.
📝 Abstract
Prior research has raised concerns about students' over-reliance on large language models (LLMs) in higher education. This paper examines how Computer Science students and instructors engage with LLMs across five scenarios: "Writing", "Quiz", "Programming", "Project-based learning", and "Information retrieval". Through user studies with 16 students and 6 instructors, we identify 7 key intents, including increasingly complex student practices. Findings reveal varying levels of conflict between student practices and instructor norms, ranging from clear conflict in "Writing-generation" and "(Programming) quiz-solving", through partial conflict in "Programming project-implementation" and "Project-based learning", to broad agreement in "Writing-revision & ideation", "(Programming) quiz-correction", and "Info-query & summary". We document that instructors are shifting from prohibiting students' use of LLMs to recognizing it in high-quality work, integrating usage records into assessment grading. Finally, we propose LLM design guidelines: deploying default guardrails with game-like and empathetic interaction to prevent students from "deserting" LLMs, especially in "Writing-generation", while using comprehension checks in low-conflict intents to promote learning.