🤖 AI Summary
To address insufficient solution-set diversity and the difficulty of jointly modeling complex contextual dependencies and diverse preference requirements in neural multi-objective combinatorial optimization (MOCO), this paper proposes the Context-aware Diversity Enhancement (CDE) framework. CDE introduces a novel two-tier context-aware architecture that synergistically combines node-level autoregressive modeling with solution-level expected hypervolume maximization. It further designs a hypervolume residual update strategy, enabling the Pareto attention model to jointly capture local and global Pareto front structures. The method integrates conditional sequence modeling, Pareto attention, and reinforcement learning with rollout evaluation. Evaluated on three canonical MOCO benchmarks, CDE achieves significant improvements in both diversity and coverage of the Pareto front, consistently outperforming state-of-the-art methods. Moreover, it exhibits superior training efficiency and generalization capability.
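The node-level autoregressive modeling mentioned above can be pictured as building a solution one node at a time, conditioned on a preference vector and the partial solution so far. The sketch below is a hypothetical illustration only: `score` stands in for the Pareto attention model's per-node logits, whose actual form the summary does not specify.

```python
# Hypothetical sketch of node-level context awareness: a preference-
# conditioned autoregressive construction loop. `score` is a stand-in
# for the Pareto attention model; its signature is our assumption.

def construct_solution(nodes, preference, score):
    """Autoregressively pick the next node conditioned on the
    preference vector and the partial solution (the context)."""
    remaining = list(nodes)
    tour = [remaining.pop(0)]  # start from the first node
    while remaining:
        # score(candidate, partial_tour, preference) plays the role of
        # the model's logit for each feasible next node
        nxt = max(remaining, key=lambda n: score(n, tour, preference))
        remaining.remove(nxt)
        tour.append(nxt)
    return tour
```

In the actual method, sampling from the model's distribution (rather than the greedy `max` used here for brevity) would be used during reinforcement-learning training.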
📝 Abstract
Multi-objective combinatorial optimization (MOCO) problems are prevalent in various real-world applications. Most existing neural MOCO methods rely on problem decomposition to transform an MOCO problem into a series of single-objective combinatorial optimization (SOCO) problems, and they train attention models based on a single-step, deterministic greedy rollout. However, inappropriate decomposition and the undesirable short-sighted behaviors of previous methods tend to induce a decline in diversity. To address this limitation, we design a Context-aware Diversity Enhancement algorithm named CDE, which casts neural MOCO problems as conditional sequence modeling via autoregression (node-level context awareness) and establishes a direct relationship between the mapping of preferences and the diversity indicator of the reward based on hypervolume expectation maximization (solution-level context awareness). Building on the solution-level context awareness, we further propose a hypervolume residual update strategy that enables the Pareto attention model to capture both local and non-local information of the Pareto set/front. The proposed CDE grasps context information effectively and efficiently, resulting in enhanced diversity. Experimental results on three classic MOCO problems demonstrate that our CDE outperforms several state-of-the-art baselines.
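The hypervolume indicator that the abstract uses as its diversity measure is straightforward to compute in the bi-objective case: for minimization, it is the area dominated by the Pareto front and bounded by a reference point. A minimal sketch (our own illustration, not the paper's code) for two objectives:

```python
# Illustrative 2-D hypervolume computation for minimization problems.
# The hypervolume is the area dominated by the front and bounded above
# by a reference point `ref`; larger values mean a better-spread front.

def hypervolume_2d(front, ref):
    """Area dominated by `front` (a list of (f1, f2) points to be
    minimized) relative to the reference point `ref`."""
    # Filter to the non-dominated subset, sweeping by increasing f1.
    nd = []
    best_f2 = float("inf")
    for f1, f2 in sorted(front):
        if f2 < best_f2:  # strictly improves the second objective
            nd.append((f1, f2))
            best_f2 = f2
    # Sum the rectangular slices between consecutive front points.
    hv = 0.0
    prev_f2 = ref[1]
    for f1, f2 in nd:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```

For example, the front {(1, 3), (2, 2), (3, 1)} with reference point (4, 4) dominates an area of 6. Maximizing the expected value of this indicator over sampled preferences is what ties the preference mapping directly to diversity in the solution-level context awareness described above.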