🤖 AI Summary
In multi-task reinforcement learning, cross-task knowledge transfer remains inefficient, and policy sharing lacks dynamic selection mechanisms. This paper proposes the Cross-Task Policy Guidance (CTPG) framework, whose core innovation is a dual-gating mechanism: one gate selects high-quality, transferable policies from source tasks, while the other identifies critical decision phases in target tasks that require targeted guidance—enabling fine-grained, dynamic regulation of policy sharing. CTPG is compatible with mainstream parameter-sharing paradigms and supports diverse network architectures and optimizers. Evaluated on multi-task benchmarks spanning manipulation and locomotion control, CTPG significantly reduces convergence time for new tasks (average speedup of 37%) and improves final policy performance (+12.6% average return), demonstrating dual advantages in rapid skill acquisition and robust cross-task transfer.
📝 Abstract
Multi-task reinforcement learning endeavors to efficiently leverage shared information across various tasks, facilitating the simultaneous learning of multiple tasks. Existing approaches primarily focus on parameter sharing with carefully designed network structures or tailored optimization procedures. However, they overlook a direct and complementary way to exploit cross-task similarities: the control policies of tasks already proficient in some skills can provide explicit guidance for unmastered tasks to accelerate skills acquisition. To this end, we present a novel framework called Cross-Task Policy Guidance (CTPG), which trains a guide policy for each task to select the behavior policy interacting with the environment from all tasks' control policies, generating better training trajectories. In addition, we propose two gating mechanisms to improve the learning efficiency of CTPG: one gate filters out control policies that are not beneficial for guidance, while the other gate blocks tasks that do not necessitate guidance. CTPG is a general framework adaptable to existing parameter sharing approaches. Empirical evaluations demonstrate that incorporating CTPG with these approaches significantly enhances performance in manipulation and locomotion benchmarks.