Exploring Student Behaviors and Motivations using AI TAs with Optional Guardrails

📅 2025-04-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates students' motivations, usage patterns, and academic outcomes associated with an "optional guardrail" feature (allowing learners to choose between guided scaffolding and full solution access) in an introductory programming course, addressing concerns about overreliance and academic integrity. Method: the authors deployed a large language model-based AI teaching assistant with fine-grained behavioral logging and real-time feedback. Contribution/Results: 50% of students disabled the guardrails for at least one problem, and 14% did so for all three problems; low-performing students were more likely to use the feature and, having started assignments later, used it closer to the deadline. The predominant motivations were help with solving problems, time pressure, lack of self-regulation, and curiosity. Guardrail usage was associated with course performance. These findings provide empirical grounding and practical design guidance for AI-powered educational tools that balance pedagogical scaffolding with learner agency.

📝 Abstract
AI-powered chatbots and digital teaching assistants (AI TAs) are gaining popularity in programming education, offering students timely and personalized feedback. Despite their potential benefits, concerns about student over-reliance and academic misconduct have prompted the introduction of "guardrails" into AI TAs - features that provide scaffolded support rather than direct solutions. However, overly restrictive guardrails may lead students to bypass these tools and use unconstrained AI models, where interactions are not observable, thus limiting our understanding of students' help-seeking behaviors. To investigate this, we designed and deployed a novel AI TA tool with optional guardrails in one lab of a large introductory programming course. As students completed three code writing and debugging tasks, they had the option to receive guardrailed help or use a "See Solution" feature which disabled the guardrails and generated a verbatim response from the underlying model. We investigate students' motivations and use of this feature and examine the association between usage and their course performance. We found that 50% of the 885 students used the "See Solution" feature for at least one problem and 14% used it for all three problems. Additionally, low-performing students were more likely to use this feature and use it close to the deadline as they started assignments later. The predominant factors that motivated students to disable the guardrails were assistance in solving problems, time pressure, lack of self-regulation, and curiosity. Our work provides insights into students' solution-seeking motivations and behaviors, which have implications for the design of AI TAs that balance pedagogical goals with student preferences.
Problem

Research questions and friction points this paper is trying to address.

Investigates student reliance on AI TAs with optional guardrails
Examines motivations for disabling guardrails in programming tasks
Analyzes impact of guardrail usage on academic performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI TA with optional guardrails feature
Scaffolded support versus direct solutions
Investigating student help-seeking behaviors