🤖 AI Summary
This study addresses a core challenge in time-critical human supervisory control: balancing the benefits of highlighting critical events, which supports situation awareness, against the risk of cognitive interference. To this end, the authors propose a reinforcement learning–driven adaptive interface optimization method that integrates user gaze behavior modeling with human-in-the-loop simulation to derive personalized highlighting strategies without requiring real-world deployment. Evaluated in a drone delivery supervision scenario, the approach outperforms static rule-based baselines, suggesting the feasibility and effectiveness of intelligent, dynamic user interfaces for improving supervisory efficiency.
📝 Abstract
Interfaces for human oversight must effectively support users' situation awareness under time-critical conditions. We explore reinforcement learning (RL)-based UI adaptation to personalize alerting strategies that balance the benefits of highlighting critical events against the cognitive costs of interruptions. To enable learning without real-world deployment, we integrate models of users' gaze behavior to simulate attentional dynamics during monitoring. Using a delivery-drone oversight scenario, we present initial results suggesting that RL-based highlighting can outperform static, rule-based approaches, and we discuss challenges of intelligent oversight support.
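The learning loop the abstract describes can be illustrated with a toy sketch: a tabular RL agent chooses whether to highlight a critical event, and a simulated gaze model stands in for the user's attention. This is not the paper's actual method; the state space, gaze probabilities, and reward values below are all illustrative assumptions.

```python
import random

# Toy sketch (not the paper's implementation): an RL agent learns when
# highlighting helps a simulated user notice a critical event, versus when
# it only adds interruption cost. All numbers are illustrative assumptions.

def simulated_gaze(attending, highlighted, rng):
    """Toy gaze model: chance the user notices a critical event."""
    p = 0.9 if attending else (0.6 if highlighted else 0.1)
    return rng.random() < p

def reward(noticed, highlighted, attending):
    r = 1.0 if noticed else -1.0       # benefit of timely detection
    if highlighted and attending:
        r -= 0.5                       # interruption cost if already attending
    return r

def train(episodes=4000, eps=0.2, seed=0):
    rng = random.Random(seed)
    # State s: is the user attending (1) or looking away (0)?
    # Action a: highlight (1) or not (0).
    q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    n = {k: 0 for k in q}
    for _ in range(episodes):
        s = int(rng.random() < 0.5)                 # random attention state
        if rng.random() < eps:                      # epsilon-greedy exploration
            a = rng.choice((0, 1))
        else:
            a = max((0, 1), key=lambda x: q[(s, x)])
        noticed = simulated_gaze(s == 1, a == 1, rng)
        n[(s, a)] += 1
        # Incremental sample mean of the one-step reward per state-action pair.
        q[(s, a)] += (reward(noticed, a == 1, s == 1) - q[(s, a)]) / n[(s, a)]
    return q

q = train()
policy = {s: max((0, 1), key=lambda a: q[(s, a)]) for s in (0, 1)}
# The agent learns to highlight only when the user is looking away:
# policy == {0: 1, 1: 0}
```

In a realistic setup the state would be far richer (gaze trajectories, event criticality, time pressure), which is where learning against a simulated gaze model, rather than hand-tuned rules, becomes worthwhile.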