Efficient Multi-Task Reinforcement Learning with Cross-Task Policy Guidance

📅 2025-07-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In multi-task reinforcement learning, cross-task knowledge transfer remains inefficient, and policy sharing lacks dynamic selection mechanisms. This paper proposes the Cross-Task Policy Guidance (CTPG) framework, whose core innovation is a dual-gating mechanism: one gate selects high-quality, transferable policies from source tasks, while the other identifies critical decision phases in target tasks that require targeted guidance—enabling fine-grained, dynamic regulation of policy sharing. CTPG is compatible with mainstream parameter-sharing paradigms and supports diverse network architectures and optimizers. Evaluated on multi-task benchmarks spanning manipulation and locomotion control, CTPG significantly reduces convergence time for new tasks (average speedup of 37%) and improves final policy performance (+12.6% average return), demonstrating dual advantages in rapid skill acquisition and robust cross-task transfer.

Technology Category

Application Category

📝 Abstract
Multi-task reinforcement learning endeavors to efficiently leverage shared information across various tasks, facilitating the simultaneous learning of multiple tasks. Existing approaches primarily focus on parameter sharing with carefully designed network structures or tailored optimization procedures. However, they overlook a direct and complementary way to exploit cross-task similarities: the control policies of tasks already proficient in some skills can provide explicit guidance for unmastered tasks to accelerate skills acquisition. To this end, we present a novel framework called Cross-Task Policy Guidance (CTPG), which trains a guide policy for each task to select the behavior policy interacting with the environment from all tasks' control policies, generating better training trajectories. In addition, we propose two gating mechanisms to improve the learning efficiency of CTPG: one gate filters out control policies that are not beneficial for guidance, while the other gate blocks tasks that do not necessitate guidance. CTPG is a general framework adaptable to existing parameter sharing approaches. Empirical evaluations demonstrate that incorporating CTPG with these approaches significantly enhances performance in manipulation and locomotion benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Leveraging cross-task similarities for multi-task reinforcement learning
Improving skills acquisition via guide policies from proficient tasks
Enhancing learning efficiency with adaptive gating mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-Task Policy Guidance (CTPG) framework
Two gating mechanisms for efficiency
Adaptable to parameter sharing approaches
🔎 Similar Papers
No similar papers found.
J
Jinmin He
Institute of Automation, Chinese Academy of Sciences, School of Artificial Intelligence, University of Chinese Academy of Sciences
K
Kai Li
Institute of Automation, Chinese Academy of Sciences, School of Artificial Intelligence, University of Chinese Academy of Sciences
Y
Yifan Zang
Institute of Automation, Chinese Academy of Sciences, School of Artificial Intelligence, University of Chinese Academy of Sciences
Haobo Fu
Haobo Fu
Tencent AI Lab, University of Birmingham
Reinforcement LearningEvolutionary Computation
Q
Qiang Fu
Tencent AI Lab
J
Junliang Xing
Tsinghua University
J
Jian Cheng
Institute of Automation, Chinese Academy of Sciences, AiRiA