🤖 AI Summary
Human-robot collaborative systems in critical missions (e.g., surveillance, search-and-rescue) suffer from insider threats: compromised unmanned vehicles, adversarial AI agents, or malicious human analysts that undermine shared situational awareness and lead to collaborative failure. Method: This paper is, to the authors' knowledge, the first to systematically embed an active deception mechanism within the Shared Mental Model (SMM), featuring configurable decoy tasks designed to elicit anomalous behavior for early threat detection. The approach integrates multimodal threat perception, adversarial task scheduling, and a triggerable adaptive recovery framework that includes model retraining, system reinstallation, and personnel substitution. Results: Experiments demonstrate a sustained mission success rate of roughly 80% under high attack rates, eight times higher than baseline methods, significantly enhancing the robustness and resilience of human-robot teams in adversarial environments. Core contribution: a novel "security-enhanced Shared Mental Model" paradigm that unifies collaborative efficacy with intrinsic, proactive security defense.
📝 Abstract
We present DASH (Deception-Augmented Shared mental model for Human-machine teaming), a novel framework that enhances mission resilience by embedding proactive deception into the Shared Mental Model (SMM). Designed for mission-critical applications such as surveillance and rescue, DASH introduces "bait tasks" to detect insider threats (e.g., compromised Unmanned Ground Vehicles (UGVs), AI agents, or human analysts) before they degrade team performance. Upon detection, tailored recovery mechanisms are activated, including UGV system reinstallation, AI model retraining, or human analyst replacement. In contrast to existing SMM approaches that neglect insider risks, DASH improves both coordination and security. Empirical evaluations across four schemes (DASH, SMM-only, no-SMM, and baseline) show that DASH sustains approximately 80% mission success under high attack rates, eight times higher than the baseline. This work contributes a practical human-AI teaming framework grounded in shared mental models, a deception-based strategy for insider threat detection, and empirical evidence of enhanced robustness under adversarial conditions. DASH establishes a foundation for secure, adaptive human-machine teaming in contested environments.
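To make the bait-task idea concrete, the sketch below simulates the detect-then-recover loop described in the abstract: decoy tasks are injected into the shared task pool, teammates that interact with a decoy are flagged, and a type-specific recovery action (system reinstallation, model retraining, or analyst replacement) is selected. All class, field, and function names here are illustrative assumptions, not the paper's actual API, and the one-line behavioral model of a compromised teammate is a deliberate simplification.

```python
# Hypothetical sketch of DASH-style bait-task detection and recovery.
# All identifiers are illustrative assumptions, not the paper's API.
from dataclasses import dataclass

# Recovery action per teammate type, taken from the abstract.
RECOVERY = {
    "ugv": "reinstall system",
    "ai": "retrain model",
    "human": "replace analyst",
}

@dataclass
class Teammate:
    name: str
    kind: str          # "ugv" | "ai" | "human"
    compromised: bool  # ground truth, known only to the simulation

    def acts_on(self, task: dict) -> bool:
        # Simplified behavioral model: a loyal teammate ignores decoys
        # (the SMM marks them as no-ops), while a compromised teammate
        # acts on every task it sees, exposing itself on decoys.
        if task["decoy"]:
            return self.compromised
        return True

def run_mission(team: list, tasks: list) -> dict:
    """Return {teammate name: recovery action} for flagged teammates."""
    flagged = {}
    for task in tasks:
        for mate in team:
            if mate.acts_on(task) and task["decoy"]:
                flagged[mate.name] = RECOVERY[mate.kind]
    return flagged

if __name__ == "__main__":
    team = [
        Teammate("ugv1", "ugv", compromised=True),
        Teammate("analyst1", "human", compromised=False),
    ]
    tasks = [{"id": 1, "decoy": False}, {"id": 2, "decoy": True}]
    print(run_mission(team, tasks))  # flags ugv1 for reinstallation
```

In this toy run, only the compromised UGV touches the decoy, so it is the sole teammate mapped to a recovery action; the loyal analyst completes real tasks unflagged, mirroring the paper's claim that deception adds security without disrupting coordination.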