A Framework for Controllable Multi-objective Learning with Annealed Stein Variational Hypernetworks

πŸ“… 2025-06-07
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
In multi-objective learning, simultaneously maximizing hypervolume and maintaining diversity within the Pareto set remains a fundamental challenge. Method: This paper proposes a controllable optimization framework based on annealed Stein Variational Gradient Descent (SVGD) for Pareto Set Learning (PSL), the first application of SVGD to PSL. It introduces a diversity-aware gradient direction strategy coupled with a temperature-annealing schedule to jointly enhance convergence, distributional diversity, and training stability. The approach integrates multi-task learning with hypernetwork-based modeling to enable efficient, shared representation learning across objectives. Contribution/Results: Evaluated on multiple standard benchmarks, the method achieves significant improvements in Pareto front coverage and hypervolume, consistently outperforming state-of-the-art approaches while ensuring robust and stable optimization.
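
The hypernetwork component can be pictured concretely. Below is a minimal PyTorch sketch of preference-conditioned Pareto Set Learning, assuming a two-objective problem, a single 64-unit hidden layer, a Dirichlet preference sampler, and a linear-scalarization training loss; these are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of preference-conditioned Pareto Set Learning with a
# hypernetwork. Layer sizes, the Dirichlet preference sampler, and the
# linear scalarization are assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class ParetoHypernet(nn.Module):
    """Maps a preference vector on the simplex to target-model weights."""
    def __init__(self, n_objectives: int, target_dim: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_objectives, 64),
            nn.ReLU(),
            nn.Linear(64, target_dim),  # flat weight vector of the target model
        )

    def forward(self, pref: torch.Tensor) -> torch.Tensor:
        return self.body(pref)

hypernet = ParetoHypernet(n_objectives=2, target_dim=10)
pref = torch.distributions.Dirichlet(torch.ones(2)).sample()  # random trade-off
theta = hypernet(pref)  # one candidate Pareto solution per preference
# Training would repeat: sample pref, compute the objective losses at theta,
# and minimize a scalarization such as (pref * losses).sum(), so a single
# hypernet learns the whole preference-to-solution map.
```

The design point is that one shared network amortizes the entire Pareto set: each preference vector indexes a different trade-off solution.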

πŸ“ Abstract
Pareto Set Learning (PSL) is popular as an efficient approach to obtaining the complete set of optimal solutions in Multi-objective Learning (MOL). A set of optimal solutions approximates the Pareto set, and its image in objective space forms a dense set of points on the Pareto front. However, current methods face a challenge: how to keep the Pareto solutions diverse while maximizing the hypervolume value. In this paper, we propose a novel method that addresses this challenge by employing Stein Variational Gradient Descent (SVGD) to approximate the entire Pareto set. SVGD pushes a set of particles towards the Pareto set by applying a form of functional gradient descent, which helps the optimal solutions both converge and stay diverse. Additionally, we investigate diverse gradient-direction strategies within a unified SVGD framework for multi-objective optimization and equip this framework with an annealing schedule to promote stability. We introduce our method, SVH-MOL, and validate its effectiveness through extensive experiments on multi-objective problems and multi-task learning, demonstrating its superior performance.
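
For reference, the standard SVGD update of Liu and Wang (2016), which the abstract invokes, moves each particle $x_i$ by a kernelized functional gradient; here $p$ is the target density and $k$ a positive-definite kernel, and the paper adapts the drift term to multi-objective gradient directions:

$$
x_i \leftarrow x_i + \epsilon\,\hat{\phi}^*(x_i), \qquad
\hat{\phi}^*(x) = \frac{1}{n}\sum_{j=1}^{n}\Big[\, k(x_j, x)\,\nabla_{x_j}\log p(x_j) + \nabla_{x_j} k(x_j, x) \,\Big].
$$

The first term pulls particles toward high-density (here, Pareto-optimal) regions; the second, repulsive kernel-gradient term pushes them apart, which is the source of diversity.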
Problem

Research questions and friction points this paper is trying to address.

Diversify Pareto solutions while maximizing the hypervolume value
Approximate the entire Pareto set using Stein Variational Gradient Descent
Enhance stability with an annealing schedule in multi-objective optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses SVGD for Pareto set approximation
Employs diverse gradient direction strategies
Incorporates a temperature-annealing schedule for stability (see the sketch after this list)
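
A minimal sketch of one annealed SVGD step follows, in the generic annealed form where a temperature factor scales the attractive term; the linear ramp, the RBF kernel with median-heuristic bandwidth, and the `grad_log_p` callback are assumptions, not the paper's exact SVH-MOL update.

```python
# One annealed SVGD particle update (generic sketch, not the paper's exact
# SVH-MOL rule). gamma anneals the attractive term, so early iterations are
# dominated by kernel repulsion, spreading particles across the Pareto set.
import numpy as np

def annealed_svgd_step(X, grad_log_p, t, total_steps, lr=1e-2):
    """X: (n, d) particles; grad_log_p: callback returning (n, d) gradients."""
    n = len(X)
    diffs = X[:, None, :] - X[None, :, :]                 # pairwise x_i - x_j
    sq_dists = np.sum(diffs ** 2, axis=-1)
    h = max(np.median(sq_dists) / np.log(n + 1), 1e-8)    # median heuristic
    K = np.exp(-sq_dists / h)                             # RBF kernel matrix
    gamma = min(1.0, t / (0.5 * total_steps))             # assumed linear ramp
    drift = gamma * (K @ grad_log_p(X)) / n               # annealed attraction
    repulse = (2.0 / h) * (diffs * K[..., None]).sum(axis=1) / n  # diversity
    return X + lr * (drift + repulse)
```

In a PSL setting, `grad_log_p` would be replaced by the chosen multi-objective gradient direction (for example, a scalarized or hypervolume-aware gradient) evaluated at each particle.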
Minh-Duc Nguyen
CECS, VinUniversity
AI Agent · LLM · Optimization
Dung D. Le
College of Engineering and Computer Science, VinUniversity, Hanoi, Vietnam