🤖 AI Summary
This study investigates how structured prompt interfaces can enhance graduate students’ learning outcomes when interacting with large language models (LLMs), such as ChatGPT, in a robotics course. Method: A mixed-methods approach was employed—including system log analysis, pre-/post-intervention surveys, task-based performance scoring, and qualitative interviews—to compare students’ prompting behaviors, learning gains, and attitudinal shifts under structured versus unstructured LLM usage conditions. Contribution/Results: The study empirically demonstrates that a structured interface significantly fosters “code-understanding-oriented” high-quality prompting practices and reveals a positive correlation between such practices and conceptual learning gains. However, no statistically significant improvement in overall course performance was observed, and students exhibited limited sustained adoption, highlighting behavioral transfer limitations. These findings provide empirically grounded, reusable design principles for integrating LLMs into advanced computational thinking instruction.
📝 Abstract
Prior research shows that how students engage with Large Language Models (LLMs) influences their problem-solving and understanding, reinforcing the need to support productive LLM-uses that promote learning. This study evaluates the impact of a structured GPT platform designed to promote 'good' prompting behavior with data from 58 students in a graduate-level robotics course. The students were assigned to either an intervention group using the structured platform or a control group using ChatGPT freely for two practice lab sessions, before a third session where all students could freely use ChatGPT. We analyzed student perception (pre-post surveys), prompting behavior (logs), performance (task scores), and learning (pre-post tests). Although we found no differences in performance or learning between groups, we identified prompting behaviors - such as having clear prompts focused on understanding code - that were linked with higher learning gains and were more prominent when students used the structured platform. However, such behaviors did not transfer once students were no longer constrained to use the structured platform. Qualitative survey data showed mixed perceptions: some students perceived the value of the structured platform, but most did not perceive its relevance and resisted changing their habits. These findings contribute to ongoing efforts to identify effective strategies for integrating LLMs into learning and question the effectiveness of bottom-up approaches that temporarily alter user interfaces to influence students' interaction. Future research could instead explore top-down strategies that address students' motivations and explicitly demonstrate how certain interaction patterns support learning.