🤖 AI Summary
To address the poor generalizability and limited scalability of single-policy approaches in multi-objective reinforcement learning, this paper proposes a two-stage meta-training–fine-tuning framework that improves joint optimization efficiency across multiple tasks. The core contribution is a task affinity matrix constructed by leveraging a first-order approximation property of well-trained policy networks, followed by loss-driven spectral clustering for semantic-aware task grouping (with *k* ≪ *n* groups), replacing suboptimal global policies and heuristic groupings. The method comprises multi-task meta-training, lightweight gradient-estimation-based fine-tuning, and a Hessian-trace-based generalization error analysis. Evaluated on Meta-World and robotic control benchmarks, it achieves an average performance gain of 16% and up to a 26× speedup over full training for estimating the task clusters. Ablation studies confirm the significant contribution of each component.
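The grouping step can be made concrete with a short sketch. The Python snippet below is a minimal illustration, not the paper's PolicyGradEx implementation: it assumes an *n* × *n* affinity matrix `A` has already been estimated (entry `A[i, j]` scoring how much training alongside task *j* helps task *i*), and the helper name `group_tasks` is hypothetical. It symmetrizes the scores and applies off-the-shelf spectral clustering to split the *n* tasks into *k* ≪ *n* groups.

```python
# Minimal sketch (hypothetical helper, not the paper's PolicyGradEx code):
# cluster n tasks into k << n groups from an estimated task-affinity matrix.
import numpy as np
from sklearn.cluster import SpectralClustering

def group_tasks(affinity: np.ndarray, k: int) -> np.ndarray:
    """Return cluster labels in {0, ..., k-1} for each of the n tasks."""
    sym = 0.5 * (affinity + affinity.T)  # spectral clustering expects symmetry
    sym -= sym.min()                     # and non-negative edge weights
    return SpectralClustering(
        n_clusters=k, affinity="precomputed", random_state=0
    ).fit_predict(sym)

# Toy usage: 8 tasks with two planted groups of related tasks.
rng = np.random.default_rng(0)
A = rng.random((8, 8))
A[:4, :4] += 1.0
A[4:, 4:] += 1.0
print(group_tasks(A, k=2))  # e.g., [0 0 0 0 1 1 1 1] up to label swap
```

The distinguishing choice in the paper is how the affinities are scored: loss-driven (from fine-tuning losses) rather than gradient-similarity-based; the clustering machinery itself is standard.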
📄 Abstract
We study the problem of efficiently estimating policies that simultaneously optimize multiple objectives in reinforcement learning (RL). Given $n$ objectives (or tasks), we seek the optimal partition of these objectives into $k \ll n$ groups, where each group comprises related objectives that can be trained together. This problem arises in applications such as robotics, control, and preference optimization in language models, where learning a single policy for all $n$ objectives becomes suboptimal as $n$ grows. We introduce a two-stage procedure -- meta-training followed by fine-tuning -- to address this problem. We first learn a meta-policy for all objectives using multitask learning. Then, we adapt the meta-policy to multiple randomly sampled subsets of objectives. The adaptation step leverages a first-order approximation property of well-trained policy networks, which we empirically verify to be accurate within a $2\%$ error margin across various RL environments. The resulting algorithm, PolicyGradEx, efficiently estimates an aggregate task-affinity score matrix given a policy evaluation algorithm. Based on the estimated affinity score matrix, we cluster the $n$ objectives into $k$ groups by maximizing the intra-cluster affinity scores. Experiments on three robotic control benchmarks and the Meta-World benchmark demonstrate that our approach outperforms state-of-the-art baselines by $16\%$ on average, while delivering up to a $26\times$ speedup relative to performing full training to obtain the clusters. Ablation studies validate each component of our approach. For instance, compared with random grouping and gradient-similarity-based grouping, our loss-based clustering yields a $19\%$ improvement. Finally, we analyze the generalization error of policy networks by measuring the Hessian trace of the loss surface, which yields non-vacuous measures relative to the observed generalization errors.
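As a rough illustration of the final analysis step: the trace of a loss Hessian can be estimated without forming the Hessian explicitly, for instance with Hutchinson's estimator $\operatorname{tr}(H) = \mathbb{E}[v^\top H v]$ over random Rademacher probes $v$, using Hessian-vector products from double backpropagation. The PyTorch sketch below is our own illustration under that assumption, not the paper's code; `hutchinson_hessian_trace` and the toy surrogate loss are hypothetical.

```python
# Hedged sketch (not the paper's code): estimate tr(H) of a policy loss
# via Hutchinson's method, tr(H) ~= E[v^T H v] with Rademacher probes v.
import torch
import torch.nn as nn

def hutchinson_hessian_trace(loss, params, n_probes=16):
    # First-order gradients with a graph, so we can differentiate again.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    trace = 0.0
    for _ in range(n_probes):
        # Random probe vectors with entries in {-1, +1}.
        vs = [torch.randint_like(g, high=2) * 2 - 1 for g in grads]
        # Hessian-vector products H v via a second backward pass.
        hvs = torch.autograd.grad(grads, params, grad_outputs=vs, retain_graph=True)
        trace += sum((v * hv).sum().item() for v, hv in zip(vs, hvs))
    return trace / n_probes

# Toy usage on a small policy network with a stand-in surrogate loss.
net = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))
loss = net(torch.randn(64, 4)).pow(2).mean()
print(hutchinson_hessian_trace(loss, list(net.parameters())))
```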