Observing Without Doing: Pseudo-Apprenticeship Patterns in Student LLM Use

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how CS1 novices integrate large language models (LLMs) into programming problem-solving. Using a mixed-methods approach—including think-aloud protocols, surveys, and in-depth interviews—we analyze student behavior across programming tasks varying in openness and familiarity. We identify a prevalent “pseudo-apprentice” pattern: students actively invoke LLMs to generate high-quality code yet systematically bypass core cognitive apprenticeship stages—modeling, scaffolding, and fading—resulting in a pronounced misalignment among learning intent, actual practice, and self-perceived competence. This construct offers a novel theoretical lens for understanding AI-dependent learning. Furthermore, we characterize recurrent usage patterns that impede the development of independent programming proficiency. Based on these findings, we propose actionable, evidence-informed strategies for pedagogical intervention and LLM system design aimed at fostering metacognitive awareness and sustainable skill acquisition.

📝 Abstract
Large Language Models (LLMs) such as ChatGPT have quickly become part of student programmers' toolkits, whether allowed by instructors or not. This paper examines how introductory programming (CS1) students integrate LLMs into their problem-solving processes. We conducted a mixed-methods study with 14 undergraduates completing three programming tasks while thinking aloud, with permission to access any resources they chose. The tasks varied in open-endedness and familiarity to the participants and were followed by surveys and interviews. We find that students frequently adopt a pattern we call pseudo-apprenticeship, in which they engage attentively with expert-level solutions provided by LLMs but fail to participate in the stages of cognitive apprenticeship that promote independent problem-solving. This pattern was compounded by disconnects between students' intentions, actions, and self-perceived behavior when using LLMs. We offer design and instructional interventions for promoting learning and addressing the patterns of dependent AI use observed.
Problem

Research questions and friction points this paper is trying to address.

Examining how CS1 students integrate LLMs into programming problem-solving
Identifying pseudo-apprenticeship patterns hindering independent problem-solving development
Addressing disconnects between student intentions and actual LLM usage behavior
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pseudo-apprenticeship pattern in student LLM use
Mixed-methods study with programming tasks observation
Design and instructional interventions to mitigate dependent AI use
Jade Hak
University of Southern California, Department of Computer Science
Nathaniel Lam Johnson
University of Southern California, Department of Computer Science
Matin Amoozadeh
University of Houston
Human-Computer Interaction, Computing Education, AI/ML
Amin Alipour
University of Houston
Software Engineering, Computing Education
Souti Chattopadhyay
University of Southern California, Department of Computer Science