🤖 AI Summary
This study investigates business students’ ability to detect hallucinations in AI-generated content under high-stakes assessment conditions, and the psychological and cognitive mechanisms underlying that ability. Method: A mixed-methods design integrates validated educational psychology scales (e.g., AI scepticism, epistemic cognition) with authentic detection tasks; analyses employ logistic regression and mediation modeling. Contribution/Results: The study pioneers an AI literacy–driven evaluation framework that synthesizes epistemic cognition, cognitive bias theory, and transfer-of-learning principles, augmented by a structured feedback intervention. Findings reveal that only 20% of students accurately identified the hallucination; key predictors include academic performance, explanatory reasoning, writing proficiency, and AI scepticism. Structured feedback significantly enhances detection consistency and cross-contextual transfer. This work advances theoretical and empirical foundations for assessing critical digital literacy in AI-augmented educational environments.
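To make the analytical setup concrete, here is a minimal sketch of the kind of logistic regression the summary describes: a binary detection outcome regressed on the reported predictors. This is not the authors' code; all column names and the synthetic data are hypothetical placeholders, and the paper's actual dataset, variable coding, and model specification are not reproduced here.

```python
# Minimal sketch (not the authors' code): logistic regression predicting
# whether a student detected the AI hallucination (1) or not (0).
# All variable names and the synthetic data below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 211  # sample size reported in the abstract
df = pd.DataFrame({
    "academic_performance": rng.normal(60, 10, n),  # e.g. average module mark
    "explanatory_reasoning": rng.normal(0, 1, n),   # standardized scale score
    "writing_proficiency": rng.normal(0, 1, n),     # standardized scale score
    "ai_scepticism": rng.normal(0, 1, n),           # validated scale score
})
# Synthetic outcome, calibrated to roughly the ~20% detection rate reported.
logit = -1.6 + 0.03 * (df["academic_performance"] - 60) + 0.4 * df["ai_scepticism"]
df["detected"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit(
    "detected ~ academic_performance + explanatory_reasoning"
    " + writing_proficiency + ai_scepticism",
    data=df,
).fit()
print(model.summary())       # coefficients as log-odds
print(np.exp(model.params))  # odds ratios, easier to interpret
```

The mediation modeling the summary also mentions would extend this with additional regression steps (e.g., testing whether a cognitive trait transmits part of the effect of academic performance on detection), but the specific mediation paths tested are not given in this excerpt.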
📝 Abstract
As artificial intelligence (AI) becomes integral to society, the ability to critically evaluate AI-generated content is increasingly vital. In the context of management education, we examine how academic skills, cognitive traits, and AI scepticism influence students' ability to detect factually incorrect AI-generated responses (hallucinations) in a high-stakes assessment at a UK business school (n = 211, Year 2 economics and management students). We find that only 20% of students successfully identified the hallucination, with strong academic performance, interpretive thinking, writing proficiency, and AI scepticism emerging as key predictors. In contrast, rote knowledge application proved less effective, and gender differences in detection ability were observed. Beyond identifying predictors of AI hallucination detection, we link the theories of epistemic cognition, cognitive bias, and transfer of learning to new empirical evidence, demonstrating how AI literacy can enhance long-term analytical performance in high-stakes settings. We advocate a practical framework for AI-integrated assessments, showing that structured feedback mitigates initial disparities in detection ability. These findings provide actionable insights for educators designing AI-aware curricula that foster critical reasoning, epistemic vigilance, and responsible AI engagement in management education. Our study contributes to the broader discussion on the evolution of knowledge evaluation in AI-enhanced learning environments.