🤖 AI Summary
This study addresses real-time human–AI collaboration and adversarial engagement between human pilots and AI air-combat agents in high-fidelity military simulation. A multi-agent reinforcement learning (MARL) framework trains heterogeneous fighter-agent policies that exhibit distinct tactical styles. A lightweight bidirectional communication interface integrates the RL-trained models with the VR-Forces high-fidelity defense simulation platform, supporting millisecond-level data exchange within hybrid human–machine simulation architectures. Experimental evaluation shows that the trained agents achieve high tactical fidelity, rapid response, and interpretable behavioral patterns in complex 3D aerial combat scenarios, improving immersive tactical training and human–agent collaborative decision-making. This work establishes a paradigm for intelligent combat simulation and for verifying autonomous air-combat systems.
📝 Abstract
We present a system that enables real-time interaction between human users and agents trained to control fighter jets in simulated 3D air-combat scenarios. The agents are trained in a dedicated environment using multi-agent reinforcement learning (MARL). A communication link allows seamless deployment of trained agents into VR-Forces, a widely used defense simulation tool for realistic tactical scenarios. This integration enables mixed simulations in which human-controlled entities engage with intelligent agents exhibiting distinct combat behaviors. Our interaction model opens new opportunities for human–agent teaming, immersive training, and the exploration of novel tactics in defense contexts.
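The paper describes the agent–simulator communication link only at a high level. As an illustration of the general pattern (per-tick state out, action back), here is a minimal loopback sketch of a bidirectional exchange over UDP; the ports, the JSON message schema, and the placeholder policy are all our own assumptions for illustration, not the VR-Forces API or the authors' actual protocol:

```python
import json
import socket
import threading

# Illustrative addresses; any free local ports would do.
SIM_ADDR = ("127.0.0.1", 9501)
AGENT_ADDR = ("127.0.0.1", 9502)

ready = threading.Event()  # signals that the agent socket is bound

def agent_loop(steps=3):
    """Agent side: receive entity state, query a policy, reply with an action."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(AGENT_ADDR)
    ready.set()
    for _ in range(steps):
        data, addr = sock.recvfrom(4096)
        state = json.loads(data)
        # Placeholder for a trained policy, e.g. action = policy(obs(state)).
        action = {"t": state["t"], "turn_rate": 5.0, "throttle": 0.8}
        sock.sendto(json.dumps(action).encode(), addr)
    sock.close()

def simulation_stub(steps=3):
    """Stands in for the simulator: send state each tick, collect the reply."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(SIM_ADDR)
    actions = []
    for t in range(steps):
        state = {"t": t, "pos": [1000.0 + 50.0 * t, 2000.0, 3000.0],
                 "heading": 90.0}
        sock.sendto(json.dumps(state).encode(), AGENT_ADDR)
        data, _ = sock.recvfrom(4096)
        actions.append(json.loads(data))
    sock.close()
    return actions

agent = threading.Thread(target=agent_loop)
agent.start()
ready.wait()
received = simulation_stub()
agent.join()
print([a["t"] for a in received])  # → [0, 1, 2]
```

In a real deployment the simulator side would be driven by the simulation platform's own plugin or remote-control mechanism rather than a stub, but the request/response cadence per simulation tick is the essential shape of such a link.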