Investigating Student Interaction Patterns with Large Language Model-Powered Course Assistants in Computer Science Courses

📅 2025-09-09
🤖 AI Summary
This study addresses the lack of timely and flexible post-class academic support in computer science education. In the first large-scale, real-world analysis of its kind, spanning multiple universities, it examines interaction logs from over 2,000 students using LLM-powered course assistants. Combining conversation analysis, Bloom’s Taxonomy–based evaluation, analysis of an inquiry-based learning strategy, and manual annotation, the study reveals: (1) peak usage during nighttime hours and higher engagement in introductory courses; (2) high response accuracy but too few illustrative examples and weak generation of higher-order cognitive questions; and (3) low student uptake of LLM-generated follow-up prompts, which were often ignored in advanced courses. The work identifies three critical gaps: temporal support misalignment, learner heterogeneity in proficiency, and LLMs’ limited ability to foster higher-order thinking. It proposes an educationally grounded LLM system design framework, centered on pedagogical objectives, to improve the ecological integration of intelligent educational assistants, offering empirical evidence and actionable design principles for AI-augmented teaching.

📝 Abstract
Providing students with flexible and timely academic support is a challenge at most colleges and universities, leaving many students without help outside scheduled hours. Large language models (LLMs) are promising for bridging this gap, but interactions between students and LLMs are rarely overseen by educators. We developed and studied an LLM-powered course assistant deployed across multiple computer science courses to characterize real-world use and understand pedagogical implications. By Spring 2024, our system had been deployed to approximately 2,000 students across six courses at three institutions. Analysis of the interaction data shows that usage remains strong in the evenings and at night and is higher in introductory courses, indicating that our system helps address temporal support gaps and novice learner needs. We sampled 200 conversations per course for manual annotation: most sampled responses were judged correct and helpful, with a small share unhelpful or erroneous; few responses included dedicated examples. We also examined an inquiry-based learning strategy: only around 11% of sampled conversations contained LLM-generated follow-up questions, which were often ignored by students in advanced courses. A Bloom's taxonomy analysis reveals that current LLM capabilities are limited in generating higher-order cognitive questions. These patterns suggest opportunities for pedagogically oriented LLM-based educational systems and greater educator involvement in configuring prompts, content, and policies.
Problem

Research questions and friction points this paper is trying to address.

Addressing temporal academic support gaps for students outside scheduled hours
Understanding student-LLM interactions that occur without educator oversight
Limitations in generating higher-order cognitive questions and examples
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-powered course assistant for academic support
Deployed across multiple computer science courses
Analyzed interaction patterns and pedagogical implications
Chang Liu
Colorado School of Mines, Golden, USA
Loc Hoang
HiTA AI Inc., Santa Clara, USA
Andrew Stolman
HiTA AI Inc., Santa Clara, USA
Rene F. Kizilcec
Associate Professor, Cornell University
Education, Artificial Intelligence, Teaching and Learning, HCI
Bo Wu
Colorado School of Mines, Golden, USA