🤖 AI Summary
This paper addresses core challenges hindering human-machine teaming (HMT) in critical domains such as defense, healthcare, and autonomous systems: trust deficits, rigid role allocation, and inadequate evaluation frameworks. Methodologically, it introduces a unified theoretical framework bridging computational and social science: (i) a four-dimensional HMT taxonomy; (ii) mechanisms for dynamic trust calibration, ethics-aligned adaptive role assignment, and multimodal explainable interaction; and (iii) a scalable, real-world-oriented benchmarking paradigm for HMT. The study yields a comprehensive HMT capability map spanning 12 operational scenarios, systematically identifies seven cross-cutting challenges, and proposes three actionable research pathways. Collectively, these contributions support the systematic development of resilient, trustworthy, and scalable human-machine collaborative systems.
📝 Abstract
Human-Machine Teaming (HMT) is transforming collaboration across domains such as defense, healthcare, and autonomous systems by integrating AI-driven decision-making, trust calibration, and adaptive teaming. This survey presents a comprehensive taxonomy of HMT, analyzing theoretical models, including reinforcement learning, instance-based learning, and interdependence theory, alongside interdisciplinary methodologies. Unlike prior reviews, we examine team cognition, ethical AI, multimodal interactions, and real-world evaluation frameworks. Key challenges include explainability, role allocation, and scalable benchmarking. We propose future research directions in cross-domain adaptation, trust-aware AI, and standardized testbeds. By bridging the computational and social sciences, this work lays a foundation for resilient, ethical, and scalable HMT systems.