Scalable Classification of Course Information Sheets Using Large Language Models: A Reusable Institutional Method for Academic Quality Assurance

πŸ“… 2026-03-13
πŸ€– AI Summary
This study addresses the urgent need for higher education institutions to systematically evaluate the risk of generative artificial intelligence (GenAI) misuse in the assessment components of course designs. It proposes the first reusable and transferable institutional framework for large language model (LLM) deployment, implementing an end-to-end four-phase pipeline that automatically scans thousands of course information sheets, classifies them into three risk tiers (Clear risk, Potential risk, Low risk), validates the results, and disseminates feedback. Through multi-model comparison, iterative prompt engineering, and an automated reporting system, the framework achieved 87% agreement with expert annotations after five optimization rounds. The initial screening classified 60.3% of courses as Clear risk; a re-scan the following year revealed a marked reduction in risk among practice-oriented courses, demonstrating the framework's effectiveness and scalability.
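The pipeline's core step, mapping each course information sheet to one of the three risk tiers, can be sketched as below. This is an illustrative reconstruction, not the authors' code: the prompt wording, the function names, and the offline keyword stub (standing in for a production model such as GPT-4o) are all assumptions.

```python
# Hedged sketch of the classification step: assign a course information sheet
# to one of the paper's three risk tiers. Prompt text and names are invented.

RISK_LABELS = ("Clear risk", "Potential risk", "Low risk")

PROMPT_TEMPLATE = (
    "You audit course information sheets for assessment components that "
    "students could complete with generative AI tools.\n"
    "Classify the sheet below as exactly one of: Clear risk, Potential risk, "
    "Low risk. Reply with the label only.\n\nSheet:\n{sheet}"
)

def classify_sheet(sheet_text, llm):
    """Ask an LLM callable (prompt -> str) for a risk label; fall back to
    'Potential risk' when the reply is not a known label."""
    reply = llm(PROMPT_TEMPLATE.format(sheet=sheet_text)).strip()
    return reply if reply in RISK_LABELS else "Potential risk"

def keyword_stub(prompt):
    """Offline stand-in for a real model so the sketch runs without an API."""
    text = prompt.lower()
    if "take-home essay" in text or "unsupervised" in text:
        return "Clear risk"
    if "oral exam" in text or "supervised" in text:
        return "Low risk"
    return "Potential risk"

print(classify_sheet("Assessment: take-home essay, unsupervised.", keyword_stub))
# -> Clear risk
```

In production, `keyword_stub` would be replaced by a call to the chosen model; the fallback to "Potential risk" is one plausible way to route malformed replies to human review rather than silently mislabeling them.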

πŸ“ Abstract
Purpose: Higher education institutions face increasing pressure to audit course designs for generative AI (GenAI) integration. This paper presents an end-to-end method for using large language models (LLMs) to scan course information sheets at scale, identify where assessments may be vulnerable to student use of GenAI tools, validate system performance through iterative refinement, and operationalise results through direct stakeholder communication.

Method: We developed a four-phase pipeline: (0) manual pilot sampling; (1) iterative prompt engineering with multi-model comparison; (2) a full production scan of 4,684 Bachelor and Master course information sheets (Academic Year 2024-2025) from the Vrije Universiteit Brussel (VUB), with automated report generation and email distribution to teaching teams (91.4% address-matched), using a three-tier risk taxonomy (Clear risk, Potential risk, Low risk); and (3) a longitudinal re-scan of 4,675 sheets after the next catalogue release.

Results: Five iterations of prompt refinement achieved 87% agreement with expert labels. GPT-4o was selected for production based on its superior handling of ambiguous cases involving internships and practical components. The Year 1 scan classified 60.3% of courses as Clear risk, 15.2% as Potential risk, and 24.5% as Low risk. The Year 2 comparison revealed substantial shifts in risk distributions, with improvements most pronounced in practice-oriented programmes.

Implications: The method enables institutions to rapidly transform heterogeneous catalogue data into structured, actionable intelligence. The approach is transferable to other audit domains (sustainability, accessibility, pedagogical alignment) and provides a template for responsible LLM deployment in higher education governance.
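The validation loop in phase (1) amounts to measuring label agreement between the model and expert annotators on a pilot sample, then revising the prompt until agreement plateaus. A minimal sketch of that agreement metric, with invented example labels (the paper itself reports 87% agreement after five refinement rounds):

```python
# Illustrative sketch of the phase-1 validation metric: the fraction of pilot
# sheets where the model's risk tier matches the expert annotation.
# The labels below are invented for demonstration only.

def agreement(model_labels, expert_labels):
    """Fraction of sheets where model and expert assign the same tier."""
    assert len(model_labels) == len(expert_labels)
    hits = sum(m == e for m, e in zip(model_labels, expert_labels))
    return hits / len(expert_labels)

expert = ["Clear risk", "Low risk", "Potential risk", "Clear risk"]
model  = ["Clear risk", "Low risk", "Clear risk",     "Clear risk"]
print(f"{agreement(model, expert):.0%}")  # -> 75%
```

Raw percent agreement is the statistic the abstract quotes; a chance-corrected measure such as Cohen's kappa would be a natural complement, since the three tiers are unevenly distributed (60.3% / 15.2% / 24.5% in Year 1).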
Problem

Research questions and friction points this paper is trying to address.

generative AI
academic quality assurance
course information sheets
assessment vulnerability
higher education governance
Innovation

Methods, ideas, or system contributions that make the work stand out.

large language models
generative AI
academic quality assurance
prompt engineering
risk classification
Brecht Verbeken
Vrije Universiteit Brussel, Pleinlaan 5, 1050 Brussel, Belgium
Joke Van den Broeck
Vrije Universiteit Brussel, Pleinlaan 5, 1050 Brussel, Belgium
Inge De Cleyn
Vrije Universiteit Brussel, Pleinlaan 5, 1050 Brussel, Belgium
Steven Van Luchene
Vrije Universiteit Brussel, Pleinlaan 5, 1050 Brussel, Belgium
Nadine Engels
Vrije Universiteit Brussel, Pleinlaan 5, 1050 Brussel, Belgium
Andres Algaba
Vrije Universiteit Brussel, Pleinlaan 5, 1050 Brussel, Belgium
Vincent Ginis
Vrije Universiteit Brussel / Harvard University
Physics | Machine Learning