Listening with Language Models: Using LLMs to Collect and Interpret Classroom Feedback

📅 2025-08-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional end-of-term surveys deliver feedback that is delayed, superficial, and hard to act on. This paper introduces an LLM-driven conversational classroom feedback system comprising three modules (PromptDesigner, FeedbackCollector, and FeedbackAnalyzer), evaluated empirically in two graduate courses at UC Santa Cruz. The system supports mid-semester, dynamic, structured, and reflective dialogue between instructors and students, overcoming the limitations of static surveys. Its key contributions are: (1) the first deep integration of LLMs into a closed-loop pedagogical feedback cycle; and (2) the use of prompt engineering and automated analysis pipelines to improve the contextual relevance, depth, and instructional utility of feedback. Results show significantly improved student engagement and feedback quality: instructors reported high adaptability and concrete pedagogical guidance, and students strongly preferred the open-ended conversational format. The work establishes a scalable, high-fidelity paradigm for data-informed teaching improvement.

📝 Abstract
Traditional end-of-quarter surveys often fail to provide instructors with timely, detailed, and actionable feedback about their teaching. In this paper, we explore how Large Language Model (LLM)-powered chatbots can reimagine the classroom feedback process by engaging students in reflective, conversational dialogues. Through the design and deployment of a three-part system (PromptDesigner, FeedbackCollector, and FeedbackAnalyzer), we conducted a pilot study across two graduate courses at UC Santa Cruz. Our findings suggest that LLM-based feedback systems offer richer insights, greater contextual relevance, and higher engagement compared to standard survey tools. Instructors valued the system's adaptability, specificity, and ability to support mid-course adjustments, while students appreciated the conversational format and opportunity for elaboration. We conclude by discussing the design implications of using AI to facilitate more meaningful and responsive feedback in higher education.
Problem

Research questions and friction points this paper is trying to address.

Collecting timely classroom feedback using LLM chatbots
Interpreting student feedback through conversational AI dialogues
Replacing traditional end-of-quarter surveys with adaptive systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-powered chatbots for conversational feedback
Three-part system design for feedback collection
Real-time analysis enabling mid-course teaching adjustments
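The paper does not publish code, but the three-part design above can be illustrated with a minimal, hypothetical sketch. The class names mirror the paper's module names; everything else (the prompt template, the keyword-based theme tally) is an assumption for illustration. The real system would use an LLM chatbot for adaptive follow-up questions and an LLM pipeline for thematic analysis rather than the toy logic shown here.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class PromptDesigner:
    """Hypothetical: templates an instructor's focus area into an
    open-ended conversational prompt."""
    course_topic: str
    def design(self, focus: str) -> str:
        return (f"In {self.course_topic}, how has '{focus}' worked "
                "for you so far? What would you change?")

@dataclass
class FeedbackCollector:
    """Collects free-text student replies. In the paper's system an LLM
    chatbot would ask adaptive follow-ups; here we just store replies."""
    responses: list = field(default_factory=list)
    def collect(self, reply: str) -> None:
        self.responses.append(reply)

class FeedbackAnalyzer:
    """Toy stand-in for the paper's analysis pipeline: tally a few
    assumed theme keywords so the instructor sees aggregate signals."""
    THEMES = {"pace": "pacing", "fast": "pacing",
              "examples": "examples", "homework": "workload"}
    def summarize(self, responses: list) -> dict:
        counts = Counter()
        for text in responses:
            for word, theme in self.THEMES.items():
                if word in text.lower():
                    counts[theme] += 1
        return dict(counts)
```

A closed-loop run under these assumptions: design a prompt, collect mid-course replies, and summarize themes the instructor can act on before the term ends.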