🤖 AI Summary
Estimating the Pareto front is challenging in high-dimensional sparse settings with costly annotations and conflicting objectives, leading to poor multi-distribution generalization and suboptimal fairness–risk trade-offs. Method: We propose the first two-stage multi-objective learning (MOL) framework that explicitly exploits intrinsic low-dimensional data structure. Grounded in theoretical analysis revealing the failure mechanism of standard regularization in MOL, our approach integrates structured constraints (sparsity/low-rank), a differentiable Pareto-front approximation, and multi-task gradient coordination. Contribution/Results: Experiments demonstrate significant improvements in Pareto solution quality, out-of-distribution generalization, and fairness–risk trade-off accuracy. This work provides the first systematic empirical validation that low-dimensional regularization—long established in single-objective learning—can be effectively transferred to the multi-objective learning paradigm, establishing a theoretically grounded, structure-aware foundation for scalable and robust MOL.
📝 Abstract
Simultaneously addressing multiple objectives is becoming increasingly important in modern machine learning. At the same time, data is often high-dimensional and costly to label. For a single objective such as prediction risk, conventional regularization techniques are known to improve generalization when the data exhibits low-dimensional structure like sparsity. However, it is largely unexplored how to leverage this structure in the context of multi-objective learning (MOL) with multiple competing objectives. In this work, we discuss how the application of vanilla regularization approaches can fail, and propose a two-stage MOL framework that can successfully leverage low-dimensional structure. We demonstrate its effectiveness experimentally for multi-distribution learning and fairness-risk trade-offs.