🤖 AI Summary
This study addresses the vulnerability of traditional written exams to answer generation by large language models (LLMs) and the limited scalability of oral assessments, which better evaluate genuine understanding. To bridge this gap, the authors propose a low-cost, scalable, and personalized AI-powered oral examination system that leverages speech-based interaction. The system dynamically generates questions, employs multiple LLMs for collaborative scoring, and incorporates a rule-based adjudication mechanism to ensure structured evaluation. It reduces the cost of large-scale oral exams to under $0.50 per student and openly shares exam structures to support “practice-as-learning.” Evaluated with 36 undergraduate students, the system achieved high inter-rater reliability (Krippendorff’s α = 0.86); while 70% of participants affirmed its effectiveness in assessing comprehension, 83% reported higher anxiety compared to written exams.
📝 Abstract
Large language models have broken take-home exams. Students generate polished work they cannot explain under follow-up questioning. Oral examinations are a natural countermeasure -- they require real-time reasoning and cannot be outsourced to an LLM -- but they have never scaled. Voice AI changes this. We describe a system that conducted 36 oral examinations for an undergraduate AI/ML course at a total cost of \$15 (\$0.42 per student), low enough to attach oral comprehension checks to every assignment rather than reserving them for high-stakes finals. Because the LLM generates questions dynamically from a rubric, the entire examination structure can be shared in advance: practice is learning, and there is no exam to leak. A multi-agent architecture decomposes each examination into structured phases, and a council of three LLM families grades each transcript through a deliberation round in which models revise scores after reviewing peer evidence, achieving inter-rater reliability (Krippendorff's $\alpha$ = 0.86) above conventional thresholds. But the system also broke in instructive ways: the agent stacked questions despite explicit prohibitions, could not randomize case selection, and a cloned professorial voice was perceived as aggressive rather than familiar. The recurring lesson is that behavioral constraints on LLMs must be enforced through architecture, not prompting alone. Students largely agreed the format tested genuine understanding (70%), yet found it more stressful than written exams (83%) -- unsurprising given that 83% had never taken any oral examination. We document the full design, failure modes, and student experience, and include all prompts as appendices.
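The reliability figure reported above, Krippendorff's $\alpha$, compares observed disagreement among graders to the disagreement expected by chance ($\alpha = 1 - D_o / D_e$). As a minimal illustrative sketch (not the paper's actual code), here is the interval-metric computation for a raters-by-items score matrix, with `None` marking missing ratings:

```python
from itertools import permutations

def krippendorff_alpha_interval(ratings):
    """Krippendorff's alpha for interval data.

    ratings: list of rater rows; each row lists one score per item
    (None for missing). Returns alpha = 1 - D_o / D_e, where D_o is
    observed and D_e is chance-expected disagreement.
    """
    n_items = len(ratings[0])
    # Keep only "pairable" units: items rated by at least two raters.
    units = []
    for j in range(n_items):
        vals = [row[j] for row in ratings if row[j] is not None]
        if len(vals) >= 2:
            units.append(vals)
    all_vals = [v for unit in units for v in unit]
    n = len(all_vals)
    # Observed disagreement: squared differences within each item,
    # averaged over all pairable values.
    d_o = sum(
        sum((a - b) ** 2 for a, b in permutations(unit, 2)) / (len(unit) - 1)
        for unit in units
    ) / n
    # Expected disagreement: squared differences over all value pairs,
    # regardless of which item they belong to.
    d_e = sum((a - b) ** 2 for a, b in permutations(all_vals, 2)) / (n * (n - 1))
    return 1.0 - d_o / d_e

# Three graders in perfect agreement on four items yields alpha = 1.0.
print(krippendorff_alpha_interval([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))
```

An α of 0.86 means observed grader disagreement is only 14% of what chance alone would produce, comfortably above the 0.80 threshold conventionally taken to indicate reliable coding.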