Coordinated Strategies in Realistic Air Combat by Hierarchical Multi-Agent Reinforcement Learning

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
In realistic air combat, incomplete situational awareness and highly nonlinear flight dynamics pose significant challenges to multi-agent cooperative decision-making. Method: The paper builds a 3D multi-agent air combat environment and proposes a hierarchical multi-agent reinforcement learning (MARL) framework in which a high-level policy issues discrete tactical commands while a low-level policy executes continuous flight control. To ease training, the framework combines heterogeneous agent modeling, curriculum learning, and league-based training. Contribution/Results: Experiments show substantial gains in both training efficiency and adversarial performance: the proposed method achieves higher multi-target cooperative strike success rates and better agent survivability than baseline approaches, offering a scalable path toward autonomous cooperative decision-making in complex, dynamic environments.
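The two-level decision structure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name, the linear policy weights, and the fixed re-decision interval are all assumptions made for clarity. The high-level policy re-selects a discrete tactical command at a coarse timescale, and the low-level policy maps the current observation plus the active command to a continuous control vector.

```python
import numpy as np

class HierarchicalPolicy:
    """Illustrative two-level air-combat controller (toy sketch).

    Every `command_interval` steps the high-level policy picks one of
    `n_commands` discrete tactical commands; between those decisions the
    low-level policy maps (observation, active command) to a continuous
    control vector (e.g. stick and throttle inputs).
    """

    def __init__(self, obs_dim, n_commands, act_dim, command_interval=10, seed=0):
        rng = np.random.default_rng(seed)
        # Placeholder linear "policies"; in practice these would be
        # trained neural networks.
        self.W_high = rng.normal(size=(n_commands, obs_dim))
        self.W_low = rng.normal(size=(act_dim, obs_dim + n_commands))
        self.command_interval = command_interval
        self.n_commands = n_commands
        self._t = 0
        self._command = 0

    def act(self, obs):
        # High level: re-select the tactical command at a coarse timescale.
        if self._t % self.command_interval == 0:
            self._command = int(np.argmax(self.W_high @ obs))
        self._t += 1
        # Low level: continuous control conditioned on the one-hot command.
        one_hot = np.eye(self.n_commands)[self._command]
        action = np.tanh(self.W_low @ np.concatenate([obs, one_hot]))
        return self._command, action

policy = HierarchicalPolicy(obs_dim=6, n_commands=4, act_dim=3)
cmd, action = policy.act(np.zeros(6))
```

The key design point is the temporal abstraction: the high-level command stays fixed for several low-level control steps, which shortens the effective horizon the tactical policy must reason over.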

📝 Abstract
Achieving mission objectives in a realistic simulation of aerial combat is highly challenging due to imperfect situational awareness and nonlinear flight dynamics. In this work, we introduce a novel 3D multi-agent air combat environment and a Hierarchical Multi-Agent Reinforcement Learning framework to tackle these challenges. Our approach combines heterogeneous agent dynamics, curriculum learning, league-play, and a newly adapted training algorithm. To this end, the decision-making process is organized into two abstraction levels: low-level policies learn precise control maneuvers, while high-level policies issue tactical commands based on mission objectives. Empirical results show that our hierarchical approach improves both learning efficiency and combat performance in complex dogfight scenarios.
Problem

Research questions and friction points this paper is trying to address.

Addresses imperfect situational awareness in air combat
Tackles nonlinear flight dynamics with hierarchical control
Enhances multi-agent coordination for mission objectives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical reinforcement learning for air combat
Two-level policy structure for tactical commands
Curriculum learning with league-play training algorithm
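The curriculum-plus-league idea in the bullets above can be sketched as a toy training loop. Everything here is an assumption for illustration (the class name, the 0.7 win-rate promotion threshold, the snapshot cadence); the paper's actual algorithm is not specified at this level of detail.

```python
import copy
import random

class League:
    """Toy league-play loop with a curriculum stage (illustrative only).

    The learner trains against opponents sampled from a pool of its own
    frozen past snapshots; the curriculum stage advances once the win
    rate against the current pool is high enough.
    """

    def __init__(self, initial_policy, snapshot_every=5):
        self.learner = initial_policy                 # mutable policy "weights"
        self.pool = [copy.deepcopy(initial_policy)]   # frozen opponents
        self.snapshot_every = snapshot_every
        self.difficulty = 0                           # curriculum stage

    def training_step(self, step, win_rate):
        opponent = random.choice(self.pool)           # sample a frozen opponent
        # ... run episodes vs `opponent` and update self.learner (omitted) ...
        if win_rate > 0.7:                            # promote curriculum stage
            self.difficulty += 1
        if step % self.snapshot_every == 0:           # freeze current learner
            self.pool.append(copy.deepcopy(self.learner))
        return opponent

league = League({"w": 0.0})
for step in range(1, 11):
    league.training_step(step, win_rate=0.8)
```

Sampling opponents from a growing pool of past selves prevents the learner from overfitting to a single adversary, while the curriculum stage gates scenario difficulty on demonstrated competence.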