AI Summary
This study addresses the lack of industry-compliant evaluation methodologies in existing AI research on air traffic control (ATC) tasks, which often fails to reflect real-world operational environments. To bridge this gap, the work introduces, for the first time, the legally mandated ATC training assessment framework into AI agent testing. It proposes a human-in-the-loop evaluation paradigm grounded in regulator-certified simulator curricula, wherein domain-expert instructors conduct contextually accurate assessments of AI agent performance. This approach aligns AI capabilities with established human professional standards, substantially narrowing the divide between academic research and actual ATC operations, and lays a foundation for future human-AI collaborative air traffic management systems.
Abstract
We present a rigorous, human-in-the-loop evaluation framework for assessing the performance of AI agents on the task of air traffic control, grounded in a regulator-certified simulator-based curriculum used for training and testing real-world trainee controllers. By leveraging legally regulated assessments and involving expert human instructors in the evaluation process, our framework enables a more authentic and domain-accurate measurement of AI performance. This work addresses a critical gap in the existing literature: the frequent misalignment between academic representations of air traffic control and the complexities of the actual operational environment. It also lays the foundations for effective future human-machine teaming paradigms by aligning machine performance with human assessment targets.