Trainee Action Recognition through Interaction Analysis in CCATT Mixed-Reality Training

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current mixed-reality (MR) training for Critical Care Air Transport Teams (CCATT) relies heavily on subjective expert evaluation and labor-intensive manual annotation, hindering objective, scalable, and quantitative assessment of team coordination and performance. Method: We propose an automated evaluation framework integrating Cognitive Task Analysis (CTA) with multimodal learning. It features a domain-specific hierarchical cognitive model and a Cascade Disentangling Network–based visual action recognition pipeline for fine-grained, multi-person human–object interaction detection and temporal tracking. Multimodal data fusion enables automatic extraction of interpretable performance metrics—including reaction time and task duration—and their mapping to the CCATT operational model. Results: The framework significantly improves assessment objectivity, inter-rater consistency, and reproducibility. It delivers a scalable, empirically verifiable intelligent evaluation paradigm for high-stakes medical team training under stress.

📝 Abstract
This study examines how Critical Care Air Transport Team (CCATT) members are trained using mixed-reality simulations that replicate the high-pressure conditions of aeromedical evacuation. Each team, comprising a physician, a nurse, and a respiratory therapist, must stabilize severely injured soldiers by managing ventilators, IV pumps, and suction devices during flight. Proficient performance requires clinical expertise as well as cognitive skills such as situational awareness, rapid decision-making, effective communication, and coordinated task management, all maintained under stress. Recent advances in simulation and multimodal data analytics enable more objective and comprehensive performance evaluation, whereas traditional instructor-led assessments are subjective and may overlook critical events, limiting generalizability and consistency. However, AI-based automated evaluation still demands human input to train algorithms to assess complex team dynamics amid environmental noise and the need for accurate re-identification in multi-person tracking. To address these challenges, we introduce a systematic, data-driven assessment framework that combines Cognitive Task Analysis (CTA) with Multimodal Learning Analytics (MMLA). We developed a domain-specific CTA model for CCATT training and a vision-based action recognition pipeline using a fine-tuned Human-Object Interaction model, the Cascade Disentangling Network (CDN), to detect and track trainee-equipment interactions over time. These interactions automatically yield performance indicators (e.g., reaction time, task duration), which are mapped onto a hierarchical CTA model tailored to CCATT operations, enabling interpretable, domain-relevant performance evaluations.
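The metric-extraction step described above can be sketched as follows. This is a minimal illustration of how reaction time and task duration might be derived from timestamped human-object interaction detections; the event schema, field names, equipment labels, and the episode-gap threshold are all assumptions for illustration, not details taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical event schema: each detection is a (trainee, equipment)
# pair with a timestamp, as a per-frame HOI detector might emit.
@dataclass
class Interaction:
    t: float          # seconds since scenario start
    trainee: str      # e.g. "nurse" (illustrative role label)
    equipment: str    # e.g. "ventilator" (illustrative object label)

def reaction_time(events, cue_t, equipment):
    """Delay from a scenario cue at cue_t to the first subsequent
    interaction with the given equipment; None if none occurred."""
    times = [e.t for e in events if e.equipment == equipment and e.t >= cue_t]
    return min(times) - cue_t if times else None

def task_duration(events, equipment, gap=2.0):
    """Span of the first contiguous interaction episode with the given
    equipment; detections more than `gap` seconds apart end the episode."""
    times = sorted(e.t for e in events if e.equipment == equipment)
    if not times:
        return None
    start = end = times[0]
    for t in times[1:]:
        if t - end > gap:
            break          # episode ended; a later cluster is a new task
        end = t
    return end - start
```

A usage example: if a ventilator alarm (the cue) fires at t = 4.0 s and the first ventilator interaction is detected at t = 5.2 s, `reaction_time` returns 1.2 s, and closely spaced detections from 5.2 s to 6.0 s yield a 0.8 s task duration.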
Problem

Research questions and friction points this paper is trying to address.

Developing objective evaluation metrics for CCATT team training simulations
Addressing limitations of subjective instructor assessments in medical training
Automating recognition of trainee-equipment interactions in noisy environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cognitive Task Analysis model for CCATT training
Vision-based action recognition using fine-tuned CDN
Hierarchical performance mapping for interpretable evaluation
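The hierarchical performance mapping named above can be sketched as a roll-up of per-equipment metrics into a task tree. The task and subtask names, the two-level dictionary structure, and the equipment-to-subtask assignments below are illustrative assumptions; the paper's actual CTA model is domain-specific and not reproduced here.

```python
# Hypothetical two-level CTA hierarchy: tasks -> subtasks -> equipment.
# Node names are illustrative, not taken from the paper.
cta_model = {
    "stabilize_patient": {
        "manage_airway": ["ventilator", "suction"],
        "manage_circulation": ["iv_pump"],
    },
}

def map_metrics(metrics, model):
    """Attach per-equipment metric values (e.g. task durations in seconds)
    to the subtask nodes of the hierarchy that reference that equipment."""
    return {
        task: {
            sub: {eq: metrics[eq] for eq in eqs if eq in metrics}
            for sub, eqs in subtasks.items()
        }
        for task, subtasks in model.items()
    }
```

Attaching metrics to named nodes in the hierarchy is what makes the automated scores interpretable: an instructor can read the result as "airway management took 0.8 s" rather than as an opaque model output.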