🤖 AI Summary
This study addresses the urgent need for higher education institutions to systematically evaluate the risk of generative artificial intelligence (GenAI) misuse in the assessment components of course designs. It proposes the first reusable and transferable institutional framework for large language model (LLM) deployment: an end-to-end four-stage pipeline that automatically scans thousands of course information sheets, classifies them into three risk tiers (Clear, Potential, Low), validates the results, and disseminates feedback. Through multi-model comparison, iterative prompt engineering, and an automated reporting system, the framework achieved 87% agreement with expert annotations after five optimization rounds. The initial scan flagged 60.3% of courses as Clear risk; a re-scan the following year revealed a substantial reduction in risk among practice-oriented courses, demonstrating the framework's effectiveness and scalability.
📄 Abstract
Purpose: Higher education institutions face increasing pressure to audit course designs for generative AI (GenAI) integration. This paper presents an end-to-end method for using large language models (LLMs) to scan course information sheets at scale, identify where assessments may be vulnerable to student use of GenAI tools, validate system performance through iterative refinement, and operationalise results through direct stakeholder communication.
Method: We developed a four-phase pipeline: (0) manual pilot sampling, (1) iterative prompt engineering with multi-model comparison, (2) full production scan of 4,684 Bachelor and Master course information sheets (Academic Year 2024-2025) from the Vrije Universiteit Brussel (VUB) with automated report generation and email distribution to teaching teams (91.4% address-matched) using a three-tier risk taxonomy (Clear risk, Potential risk, Low risk), and (3) longitudinal re-scan of 4,675 sheets after the next catalogue release.
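The per-sheet classification step in phase 2 can be sketched as follows. This is a minimal illustration only: the prompt wording, function names, and the parsing rule are assumptions for exposition, not the study's actual prompt or code.

```python
# Illustrative sketch of classifying one course information sheet
# into the paper's three-tier risk taxonomy. The prompt text and
# parsing heuristic below are hypothetical.

RISK_LABELS = ("Clear risk", "Potential risk", "Low risk")

def build_prompt(sheet_text: str) -> str:
    """Wrap a course information sheet in a classification instruction."""
    return (
        "Classify the GenAI misuse risk of the assessment described below.\n"
        f"Answer with exactly one of: {', '.join(RISK_LABELS)}.\n\n"
        + sheet_text
    )

def parse_label(model_reply: str) -> str:
    """Map a free-text model reply onto one of the three risk tiers."""
    reply = model_reply.strip().lower()
    for label in RISK_LABELS:
        if label.lower() in reply:
            return label
    # Conservative fallback when the reply cannot be parsed.
    return "Potential risk"

print(parse_label("Verdict: Clear risk (take-home essay, no supervision)."))
```

In a production run, `build_prompt` would feed each sheet to the selected model (GPT-4o in this study) and `parse_label` would normalise the reply before report generation.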
Results: Five iterations of prompt refinement achieved 87% agreement with expert labels. GPT-4o was selected for production based on superior handling of ambiguous cases involving internships and practical components. The Year 1 scan classified 60.3% of courses as Clear risk, 15.2% as Potential risk, and 24.5% as Low risk. Year 2 comparison revealed substantial shifts in risk distributions, with improvements most pronounced in practice-oriented programmes.
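The 87% agreement figure is a simple exact-match rate between model and expert labels; a minimal sketch of that computation (illustrative function name and data, not the study's evaluation code):

```python
def agreement(model_labels, expert_labels):
    """Fraction of sheets where model and expert assign the same risk tier."""
    assert len(model_labels) == len(expert_labels)
    matches = sum(m == e for m, e in zip(model_labels, expert_labels))
    return matches / len(model_labels)

# Toy example: 3 of 4 labels match, so agreement is 75%.
model  = ["Clear risk", "Low risk", "Potential risk", "Clear risk"]
expert = ["Clear risk", "Low risk", "Clear risk",     "Clear risk"]
print(f"{agreement(model, expert):.0%}")  # → 75%
```

In the study, this rate was recomputed on the expert-annotated sample after each of the five prompt-refinement iterations.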
Implications: The method enables institutions to rapidly transform heterogeneous catalogue data into structured and actionable intelligence. The approach is transferable to other audit domains (sustainability, accessibility, pedagogical alignment) and provides a template for responsible LLM deployment in higher education governance.