DASH: Deception-Augmented Shared Mental Model for a Human-Machine Teaming System

📅 2025-12-21
🤖 AI Summary
Human-robot collaborative systems in critical missions (e.g., surveillance, search-and-rescue) suffer from internal threats such as compromised unmanned vehicles, adversarial AI agents, or malicious human analysts, which undermine shared situational awareness and cause collaboration failures. Method: this paper embeds, for the first time, an active deception mechanism within the Shared Mental Model (SMM), featuring configurable decoy tasks designed to elicit anomalous behavior for early threat detection. The approach integrates multimodal threat perception, adversarial task scheduling, and a triggerable adaptive recovery framework covering model retraining, system reinstallation, and personnel substitution. Results: experiments show a sustained ~80% task success rate under high attack rates, eight times higher than baseline methods, significantly improving the robustness and resilience of human-robot teams in adversarial environments. Core contribution: a novel "security-enhanced Shared Mental Model" paradigm that unifies collaborative efficacy with intrinsic, proactive security defense.

📝 Abstract
We present DASH (Deception-Augmented Shared mental model for Human-machine teaming), a novel framework that enhances mission resilience by embedding proactive deception into Shared Mental Models (SMM). Designed for mission-critical applications such as surveillance and rescue, DASH introduces "bait tasks" to detect insider threats, e.g., compromised Unmanned Ground Vehicles (UGVs), AI agents, or human analysts, before they degrade team performance. Upon detection, tailored recovery mechanisms are activated, including UGV system reinstallation, AI model retraining, or human analyst replacement. In contrast to existing SMM approaches that neglect insider risks, DASH improves both coordination and security. Empirical evaluations across four schemes (DASH, SMM-only, no-SMM, and baseline) show that DASH sustains approximately 80% mission success under high attack rates, eight times higher than the baseline. This work contributes a practical human-AI teaming framework grounded in shared mental models, a deception-based strategy for insider threat detection, and empirical evidence of enhanced robustness under adversarial conditions. DASH establishes a foundation for secure, adaptive human-machine teaming in contested environments.
Problem

Research questions and friction points this paper is trying to address.

How to sustain mission resilience when shared mental models can be undermined from inside the team
How to detect insider threats, such as compromised agents, before they degrade team performance
How to recover coordination and security once an attack is detected
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proactive deception embedded in shared mental models
Bait tasks detect insider threats before performance degradation
Tailored recovery mechanisms activated upon threat detection
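The bait-task idea described above can be sketched in a few lines: score each team member only on how it behaves toward decoy tasks, flag members whose decoy behavior is anomalous, and dispatch a recovery matched to the member's type. This is a minimal illustration; all names, data shapes, and thresholds are assumptions for exposition, not the paper's implementation.

```python
def detect_insiders(observations, threshold=0.5):
    """Flag team members whose behavior on bait (decoy) tasks is anomalous.

    observations maps a member id to a list of (is_bait, anomalous) pairs,
    where anomalous is 1 if the member's response to that task deviated from
    expected behavior, else 0. Only bait tasks count toward suspicion, since
    trustworthy members treat decoys like any other task.
    """
    flagged = set()
    for member, obs in observations.items():
        bait_scores = [anomalous for is_bait, anomalous in obs if is_bait]
        if bait_scores and sum(bait_scores) / len(bait_scores) >= threshold:
            flagged.add(member)
    return flagged


def recovery_action(member_type):
    """Pick a tailored recovery, mirroring the three mechanisms in the paper."""
    return {
        "ugv": "reinstall system image",
        "ai_agent": "retrain model",
        "human_analyst": "replace analyst",
    }[member_type]


# Example: a compromised UGV exploits both decoys; the analyst behaves normally.
obs = {
    "ugv-1": [(True, 1), (False, 0), (True, 1)],
    "analyst-2": [(True, 0), (False, 0), (True, 0)],
}
print(detect_insiders(obs))    # {'ugv-1'}
print(recovery_action("ugv"))  # reinstall system image
```

The point of scoring only bait tasks is that decoys have a known expected response, so deviations on them are unambiguous evidence, whereas anomalies on real tasks may simply reflect mission difficulty.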
Zelin Wan
Department of Computer Science, Virginia Tech, Arlington, VA, USA
Han Jun Yoon
Department of Computer Science, Virginia Tech, Arlington, VA, USA
Nithin Alluru
Department of Computer Science, Virginia Tech, Arlington, VA, USA
Terrence J. Moore
US Army DEVCOM Army Research Laboratory, Adelphi, MD, USA
Frederica F. Nelson
US Army DEVCOM Army Research Laboratory, Adelphi, MD, USA
Seunghyun Yoon
Assistant Professor, Korea Institute of Energy Technology (KENTECH)
Reinforcement Learning, Deep Learning, Data Science, Networking, Cyber Security
Hyuk Lim
Korea Institute of Energy Technology (KENTECH)
Artificial Intelligence, Cyber Security, Data Networking
Dan Dongseong Kim
Deputy Director, UQ Cybersecurity; Associate Professor, The University of Queensland
Security for AI, Dependability, Moving Target Defense, Security Engineering, Security Metrics
Jin-Hee Cho
Computer Science Department, Virginia Tech
AI-based cybersecurity, decision making under uncertainty, network science