🤖 AI Summary
This study investigates whether embedding LLM-driven reflective prompts in interactive AI-generated educational podcasts enhances learning outcomes and user experience. Method: A randomized controlled trial compared two podcast conditions—one incorporating LLM-guided reflective prompts and one without—across measures of knowledge retention, transfer performance, and subjective user experience. Contribution/Results: No statistically significant differences in learning outcomes were observed between conditions. However, participants rated the prompt-integrated podcasts as significantly less engaging, suggesting that the prompt interaction design was suboptimal. This work is the first systematic integration of LLM-based reflection mechanisms into audio-based educational content. It reveals distinctive challenges of “prompt-as-intervention” in auditory learning environments and provides empirical evidence and methodological insights for designing context-appropriate reflective interactions in AI-mediated educational content.
📝 Abstract
This study examined whether embedding LLM-guided reflection prompts in an interactive AI-generated podcast improved learning and user experience compared to a version without prompts. Thirty-six undergraduates participated; while learning outcomes were similar across conditions, the reflection prompts reduced perceived attractiveness, underscoring the need for further research on reflective interactivity design.