🤖 AI Summary
This study addresses the lack of systematic tools for jointly characterizing students' engagement patterns in seeking help and using generative AI, which hinders fine-grained analysis of AI dependency behaviors. To bridge this gap, we propose RelianceScope, a novel framework that integrates help-seeking and AI-response utilization into a unified model, defining nine distinct dependency types. By incorporating knowledge component importance as a contextual lens, RelianceScope enables systematic interpretation of dependency behaviors within open-ended human-AI interactions. Combining log analysis, manual annotation, and large language model (LLM)-based automated recognition, we encode and detect dependency patterns along multiple dimensions in the chat and coding activities of 79 undergraduate students enrolled in a web programming course. Results reveal that proactive help-seeking is often accompanied by active use of AI feedback, yet dependency patterns show no significant variation across knowledge levels. The study also validates the reliability of LLMs in identifying such dependency behaviors.
📄 Abstract
Generative AI chatbots enable personalized problem-solving, but effective learning requires students to self-regulate both how they seek help and how they use AI-generated responses. Considering engagement modes across these two actions reveals nuanced reliance patterns: for example, a student may actively engage in help-seeking by clearly specifying areas of need, yet engage passively in response-use by copying AI outputs verbatim, or vice versa. However, existing research lacks systematic tools for jointly capturing engagement across help-seeking and response-use, limiting the analysis of such reliance behaviors. We introduce RelianceScope, an analytical framework that characterizes students' reliance on chatbots during problem-solving. RelianceScope (1) operationalizes reliance into nine patterns based on combinations of engagement modes in help-seeking and response-use, and (2) situates these patterns within a knowledge-context lens that accounts for students' prior knowledge and the instructional significance of knowledge components. Rather than prescribing optimal AI use, the framework enables fine-grained analysis of reliance in open-ended student-AI interactions. As an illustrative application, we applied RelianceScope to analyze chat and code-edit logs from 79 college students in a web programming course. Results show that active help-seeking is associated with active response-use, whereas reliance patterns remain similar across knowledge mastery levels. Students often struggled to articulate their knowledge gaps and to adapt AI responses. Using our annotated dataset as a benchmark, we further demonstrate that large language models can reliably detect reliance during help-seeking and response-use. We conclude by discussing the implications of RelianceScope and design guidelines for AI-supported educational systems.