🤖 AI Summary
Oral examination anxiety can significantly impair undergraduate cognitive performance and academic development. This position paper proposes an oral examination training system integrating Extended Reality (XR), Embodied Conversational Agents (ECAs), and real-time-capable Large Language Models (LLMs). The envisioned system offers an immersive, repeatable virtual examiner environment designed to ensure emotional safety, provide personalized feedback, and support adaptive spoken interaction. Its core idea is the tight coupling of ECAs and LLMs: virtual examiners exhibit human-like multimodal behaviors while dynamically generating domain-accurate, assessment-aligned follow-up questions and feedback grounded in students’ real-time verbal responses. The authors discuss the potential of such a system to reduce anxiety and enhance self-efficacy, and outline a technical architecture and design principles for XR–AI-enabled pedagogical applications.
📝 Abstract
Oral examinations are a prevalent but psychologically demanding form of assessment in higher education. Many students experience intense anxiety, which can impair cognitive performance and hinder academic success. This position paper explores the potential of embodied conversational agents (ECAs) in extended reality (XR) environments to support students preparing for oral exams. We propose a system concept that integrates photorealistic ECAs with real-time-capable large language models (LLMs) to enable psychologically safe, adaptive, and repeatable rehearsal of oral examination scenarios. We also discuss the potential benefits and challenges of such a system.