"Everyone's using it, but no one is allowed to talk about it": College Students' Experiences Navigating the Higher Education Environment in a Generative AI World

📅 2026-02-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the growing misalignment between the widespread adoption of generative AI in higher education and existing institutional policies, which has led to covert student usage, policy ineffectiveness, and compromised learning outcomes. Drawing on semi-structured interviews with 23 undergraduate students and employing qualitative thematic analysis through a social-ecological lens, the research reconceptualizes student AI use as a “situated practice.” It reveals that informal peer norms exert far greater influence on behavior than formal institutional guidelines and introduces the concept of “AI shame” to explain the underground nature of usage. Findings indicate that institutional pressures often compel students to use AI against their pedagogical preferences, while current policies are widely perceived as vague and ineffective. Despite students’ willingness for self-regulation, they struggle to enact it—offering critical insights for educators and AI tool designers seeking to align policy with practice.

📝 Abstract
Higher education students are increasingly using generative AI in their academic work. However, existing institutional practices have not yet adapted to this shift. Through semi-structured interviews with 23 college students, our study examines the environmental and social factors that influence students' use of AI. Findings show that institutional pressure factors like deadlines, exam cycles, and grading lead students to engage with AI even when they think it undermines their learning. Social influences, particularly peer micro-communities, establish de facto AI norms regardless of official AI policies. Campus-wide "AI shame" is prevalent, often pushing AI use underground. Current institutional AI policies are perceived as generic, inconsistent, and confusing, resulting in routine noncompliance. Additionally, students develop value-based self-regulation strategies, but environmental pressures create a gap between students' intentions and their behaviors. Our findings show student AI use to be a situated practice, and we discuss implications for institutions, instructors, and system tool designers to effectively support student learning with AI.
Problem

Research questions and friction points this paper is trying to address.

generative AI
higher education
AI policy
student behavior
academic integrity
Innovation

Methods, ideas, or system contributions that make the work stand out.

generative AI
student agency
institutional policy
AI shame
situated practice
Yue Fu
Information School, University of Washington, US
Yifan Lin
Google
Machine Learning, Distributed Systems, Advertising
Yessica Wang
Information School, University of Washington, US
Sarah Tran
Information School, University of Washington, US
Alexis Hiniker
Associate Professor, University of Washington
Human-Computer Interaction