🤖 AI Summary
This work addresses the time-allocation problem in multifunctional cognitive radar systems operating in dynamic environments, where new-target search and known-target tracking must be optimized simultaneously. We propose a Pareto-optimal scheduling framework based on multi-objective deep reinforcement learning (MO-DRL). Methodologically, we use NSGA-II to estimate an upper bound on the Pareto front, characterizing the multi-objective trade-off, and employ both DDPG and SAC to learn adaptive time-allocation policies. Experimental results show that the proposed framework substantially improves environmental adaptability, with SAC outperforming DDPG in policy stability and sample efficiency, confirming the effectiveness of MO-DRL for radar resource scheduling. The study establishes a scalable optimization paradigm for intelligent temporal decision-making in cognitive radar systems.
📝 Abstract
The time allocation problem in multi-function cognitive radar systems centers on the trade-off between scanning for newly emerging targets and tracking previously detected targets. We formulate this trade-off as a multi-objective optimization problem, employ deep reinforcement learning to find Pareto-optimal solutions, and compare the deep deterministic policy gradient (DDPG) and soft actor-critic (SAC) algorithms. Our results demonstrate the effectiveness of both algorithms in adapting to various scenarios, with SAC showing improved stability and sample efficiency compared to DDPG. We further employ the NSGA-II algorithm to estimate an upper bound on the Pareto front of the considered problem. This work contributes to the development of more efficient and adaptive cognitive radar systems capable of balancing multiple competing objectives in dynamic environments.
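To make the Pareto-optimality notion concrete, here is a minimal illustrative sketch (not the paper's method) of extracting the non-dominated set from candidate time allocations. The `(search_reward, track_reward)` pairs are hypothetical values invented for illustration; both objectives are assumed to be maximized.

```python
def is_dominated(a, b):
    """True if candidate a is dominated by b: b is at least as good on
    every objective and strictly better on at least one (maximization)."""
    return all(bi >= ai for ai, bi in zip(a, b)) and \
           any(bi > ai for ai, bi in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points
            if not any(is_dominated(p, q) for q in points if q != p)]

# Hypothetical (search_reward, track_reward) pairs, one per candidate
# time-allocation policy.
candidates = [(0.9, 0.1), (0.6, 0.6), (0.2, 0.8), (0.5, 0.5), (0.1, 0.9)]
front = pareto_front(candidates)
# (0.5, 0.5) is dominated by (0.6, 0.6); the other four points survive.
```

NSGA-II maintains and refines such a non-dominated set over generations, which is how it can serve as an empirical upper bound against which learned DRL policies are compared.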