Scalable Multi-Objective and Meta Reinforcement Learning via Gradient Estimation

📅 2025-11-16
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the poor generalizability and limited scalability of single-policy approaches in multi-objective reinforcement learning, this paper proposes a two-stage meta-training and fine-tuning framework that improves joint optimization efficiency across multiple tasks. The core contribution is a task affinity matrix built from the first-order approximation property of well-trained policy networks, followed by loss-driven spectral clustering for semantics-aware task grouping (with *k* ≪ *n*), replacing a suboptimal global policy or heuristic groupings. The method comprises multi-task meta-training, lightweight gradient-estimation-based fine-tuning, and a generalization error analysis guided by the Hessian trace of the loss surface. Evaluated on Meta-World and robotic control benchmarks, it achieves an average performance gain of 16% and obtains the task clusters up to 26× faster than full training. Ablation studies confirm the contribution of each component.

๐Ÿ“ Abstract
We study the problem of efficiently estimating policies that simultaneously optimize multiple objectives in reinforcement learning (RL). Given $n$ objectives (or tasks), we seek the optimal partition of these objectives into $k \ll n$ groups, where each group comprises related objectives that can be trained together. This problem arises in applications such as robotics, control, and preference optimization in language models, where learning a single policy for all $n$ objectives is suboptimal as $n$ grows. We introduce a two-stage procedure -- meta-training followed by fine-tuning -- to address this problem. We first learn a meta-policy for all objectives using multitask learning. Then, we adapt the meta-policy to multiple randomly sampled subsets of objectives. The adaptation step leverages a first-order approximation property of well-trained policy networks, which is empirically verified to be accurate within a $2\%$ error margin across various RL environments. The resulting algorithm, PolicyGradEx, efficiently estimates an aggregate task-affinity score matrix given a policy evaluation algorithm. Based on the estimated affinity score matrix, we cluster the $n$ objectives into $k$ groups by maximizing the intra-cluster affinity scores. Experiments on three robotic control benchmarks and the Meta-World benchmark demonstrate that our approach outperforms state-of-the-art baselines by $16\%$ on average, while delivering up to a $26\times$ speedup relative to performing full training to obtain the clusters. Ablation studies validate each component of our approach. For instance, compared with random grouping and gradient-similarity-based grouping, our loss-based clustering yields an improvement of $19\%$. Finally, we analyze the generalization error of policy networks by measuring the Hessian trace of the loss surface, which gives non-vacuous measures relative to the observed generalization errors.
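The adaptation step described in the abstract rests on a first-order (Taylor) approximation: a fine-tuning step on one task moves the parameters along that task's negative gradient, so the predicted change in another task's loss is roughly the negative inner product of the two gradients. The sketch below illustrates this affinity estimate with made-up gradients; the names `grads` and `lr`, and the random values, are illustrative stand-ins, not the paper's PolicyGradEx implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks, dim, lr = 6, 8, 0.1

# Stand-in per-task loss gradients, evaluated at the shared meta-policy.
# In the paper these would come from a policy evaluation algorithm.
grads = rng.normal(size=(n_tasks, dim))

# First-order estimate: a gradient step on task j changes the parameters
# by -lr * g_j, so task i's loss changes by about -lr * <g_i, g_j>.
# A large positive entry predicts that training on j helps task i.
affinity = lr * grads @ grads.T
```

The resulting matrix is symmetric by construction, which is what makes a clustering objective over it well-defined.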
Problem

Research questions and friction points this paper is trying to address.

Efficiently estimating policies that optimize multiple objectives in reinforcement learning
Finding an optimal partition of $n$ objectives into $k$ groups of related tasks that can be trained together
Addressing the suboptimal performance of a single policy as the number of objectives grows
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage meta-training and fine-tuning for multi-objective RL
PolicyGradEx algorithm estimates task-affinity scores efficiently
Loss-based clustering groups objectives by maximizing intra-cluster affinity
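The grouping step above can be illustrated with a toy stand-in: given an estimated affinity matrix, partition the tasks so that intra-cluster affinity is high. The paper maximizes this objective with loss-driven spectral clustering; the greedy routine below (the function name `greedy_partition` and the block-structured toy matrix are illustrative assumptions) only demonstrates the objective, not the paper's algorithm.

```python
import numpy as np

def greedy_partition(affinity, k):
    """Greedy stand-in for loss-based spectral clustering: seed k groups
    with mutually dissimilar tasks, then assign each remaining task to
    the group it is most affine with on average."""
    n = affinity.shape[0]
    seeds = [0]
    while len(seeds) < k:
        rest = [i for i in range(n) if i not in seeds]
        # Pick the task least similar to every current seed.
        seeds.append(min(rest, key=lambda i: max(affinity[i, s] for s in seeds)))
    groups = [[s] for s in seeds]
    for i in range(n):
        if i in seeds:
            continue
        best = max(range(k), key=lambda g: np.mean(affinity[i, groups[g]]))
        groups[best].append(i)
    return groups

# Toy affinity with two clearly related blocks of tasks: {0,1,2} and {3,4,5}.
A = np.full((6, 6), 0.1)
A[:3, :3] = A[3:, 3:] = 0.9
groups = greedy_partition(A, 2)  # recovers the two blocks
```

On this toy matrix the routine recovers the two planted blocks; spectral clustering serves the same purpose at scale while being robust to less clean affinity structure.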
Zhenshuo Zhang
CS PhD student at Northeastern University
machine learning
Minxuan Duan
Northeastern University, Boston, Massachusetts
Youran Ye
Northeastern University, Boston, Massachusetts
Hongyang R. Zhang
Northeastern University, Boston, Massachusetts