🤖 AI Summary
To address the low task success rates of novice drone operators performing concurrent inspection and precise landing tasks, which are attributed to weak depth perception and unintuitive control interfaces, this paper proposes a shared autonomy system that requires no pilot training. Methodologically, it integrates a lightweight user-intent inference model, a CNN-based visual encoder, and a reinforcement learning policy network, and unifies real-time red/green light feedback with AR-HUD information augmentation in a single framework, enabling seamless simulation-to-reality transfer. Key contributions include: (1) high-fidelity human-drone collaboration enabled by only a simple user model; and (2) no requirement for real-world data fine-tuning. Experiments demonstrate significant improvements: task success rates increase from 16.67% and 54.29% to 95.59% and 96.22% for the landing and inspection tasks, respectively; the red/green feedback cues reduce inspection time by 19.53% and trajectory length by 17.86%, and were the preferred condition among those evaluated.
📝 Abstract
Multi-task missions for unmanned aerial vehicles (UAVs) involving inspection and landing tasks are challenging for novice pilots due to the difficulties associated with depth perception and the control interface. We propose a shared autonomy system, alongside supplementary information displays, to assist pilots in successfully completing multi-task missions without any pilot training. Our approach comprises three modules: (1) a perception module that encodes visual information into a latent representation, (2) a policy module that augments the pilot's actions, and (3) an information augmentation module that provides additional information to the pilot. The policy module is trained in simulation with simulated users and transferred to the real world without modification in a user study (n = 29), alongside alternative supplementary information schemes including learnt red/green light feedback cues and an augmented reality display. The pilot's intent is unknown to the policy module and is inferred from the pilot's input and the UAV's state. The assistant increased the task success rate from 16.67% to 95.59% for the landing task and from 54.29% to 96.22% for the inspection task. With the assistant, inexperienced pilots achieved performance similar to that of experienced pilots. The red/green light feedback cues reduced the required time by 19.53% and trajectory length by 17.86% for the inspection task, and participants rated them as their preferred condition owing to the intuitive interface and the reassurance they provided. This work demonstrates that simple user models can be used to train shared autonomy systems in simulation that transfer to physical tasks, estimating user intent and providing effective assistance and information to the pilot.
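The three-module pipeline described above can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: every function name, the fixed random projection standing in for the CNN encoder, the intent-inference rule, and the blending coefficient `alpha` are assumptions chosen for clarity.

```python
import numpy as np

def encode_observation(image):
    """Perception-module stub: compress a camera frame into a latent vector.
    (The paper uses a CNN; a fixed random projection stands in here.)"""
    rng = np.random.default_rng(0)                 # fixed weights, reproducible
    W = rng.standard_normal((8, image.size))
    return W @ image.ravel()

def infer_intent(pilot_action, uav_velocity):
    """Intent-inference stub: estimate the pilot's goal direction from their
    stick input and the UAV's current velocity (weights are illustrative)."""
    blended = 0.7 * pilot_action + 0.3 * uav_velocity
    norm = np.linalg.norm(blended)
    return blended / norm if norm > 1e-8 else blended

def assist(pilot_action, latent, intent, alpha=0.5):
    """Policy-module stub: augment the pilot's action toward the inferred
    intent. alpha sets the assistance level (0 = pilot-only control).
    `latent` is accepted but unused here; a learnt policy would consume it."""
    correction = intent - pilot_action
    return pilot_action + alpha * correction

# Toy rollout of one control step.
image = np.ones((4, 4))                            # placeholder camera frame
pilot_action = np.array([1.0, 0.0, 0.0])           # pilot pushes forward
uav_velocity = np.array([0.0, 1.0, 0.0])           # UAV drifting sideways

latent = encode_observation(image)
intent = infer_intent(pilot_action, uav_velocity)
command = assist(pilot_action, latent, intent)     # blended command to the UAV
```

The key design point mirrored here is that the assistant never needs the pilot's goal explicitly: intent is estimated from the pilot's input and the UAV's state, and the policy only nudges the commanded action toward that estimate.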