🤖 AI Summary
This study addresses the limited granular understanding of how undergraduate students leverage AI chatbots to support academic reading and the underlying cognitive engagement mechanisms. Through an eight-week longitudinal user study involving 15 undergraduates, we analyzed 838 prompts generated during course-related reading tasks. Combining content coding of prompts—categorized into decoding, comprehension, reasoning, and metacognition—with qualitative analysis, we identify a prevailing paradigm wherein students tend to “read through AI” rather than “read with AI.” Comprehension-oriented prompts dominated (59.6%), yet most interactions met only the assignment's minimum prompting requirement, frequently truncating deeper cognitive processes. Moreover, individual prompting strategies and patterns of cognitive engagement remained stable over the eight weeks, revealing a pronounced efficiency orientation and a significant gap between users' intentions and their actual behaviors.
📝 Abstract
College students increasingly use AI chatbots to support academic reading, yet we lack granular understanding of how these interactions shape their reading experience and cognitive engagement. We conducted an eight-week longitudinal study with 15 undergraduates who used AI to support assigned readings in a course. We collected 838 prompts across 239 reading sessions and developed a coding schema categorizing prompts into four cognitive themes: Decoding, Comprehension, Reasoning, and Metacognition. Comprehension prompts dominated (59.6%), with Reasoning (29.8%), Metacognition (8.5%), and Decoding (2.1%) less frequent. Most sessions (72%) contained exactly three prompts, the required minimum for the reading assignment. Within sessions, students showed a natural cognitive progression from comprehension toward reasoning, but this progression was truncated. Across the eight weeks, students' engagement patterns remained stable, with substantial individual differences persisting throughout. Qualitative analysis revealed an intention-behavior gap: students recognized that effective prompting required effort but rarely applied this knowledge, with efficiency emerging as the primary driver. Students also strategically triaged their engagement based on interest and academic pressures, exhibiting a novel pattern of reading through AI rather than with it: using AI-generated summaries as primary material to filter which sections merited deeper attention. We discuss design implications for AI reading systems that scaffold sustained cognitive engagement.