Can We Optimize Deep RL Policy Weights as Trajectory Modeling?

📅 2025-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency and sample dependence of gradient-based policy optimization in deep reinforcement learning (DRL). It proposes an implicit policy learning paradigm that treats the weight trajectory of a policy network, collected offline during training, as a time-series data modality. Methodologically, it introduces Transformer as Implicit Policy Learner (TIPL), which applies autoregressive Transformer modeling directly to policy weight sequences, bypassing explicit gradient updates and instead predicting high-performance policy parameters end-to-end. Key contributions: (1) formalizing policy weight trajectories as a learnable data modality; (2) designing TIPL to capture the implicit dynamics of policy learning; and (3) enabling end-to-end inference of near-optimal policy weights. Experiments demonstrate that TIPL fits these learning dynamics and can optimize the policy network by inference alone, suggesting a viable gradient-free optimization pathway for DRL.

📝 Abstract
Learning the optimal policy from a random network initialization is the theme of deep Reinforcement Learning (RL). As the scale of DRL training increases, treating DRL policy network weights as a new data modality and exploring their potential becomes appealing and possible. In this work, we focus on the policy learning path in deep RL, represented by the trajectory of network weights of historical policies, which reflects the evolution of the policy learning process. Taking the idea of trajectory modeling with Transformers, we propose Transformer as Implicit Policy Learner (TIPL), which processes policy network weights in an autoregressive manner. We collect the policy learning path data by running independent RL training trials, and then train our TIPL model on it. In the experiments, we demonstrate that TIPL is able to fit the implicit dynamics of policy learning and perform policy network optimization by inference.
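The abstract's pipeline — collect policy weight checkpoints from independent RL runs, then fit an autoregressive Transformer that predicts the next weight vector from the sequence so far — can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the class name `TIPLSketch`, all dimensions, and the choice of a causally masked `nn.TransformerEncoder` with an MSE next-step loss are assumptions.

```python
import torch
import torch.nn as nn

class TIPLSketch(nn.Module):
    """Hypothetical sketch of an implicit policy learner: a causal
    Transformer that reads a sequence of flattened policy weight
    vectors and predicts the next weight vector at each step."""
    def __init__(self, weight_dim, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(weight_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, weight_dim)

    def forward(self, w_seq):                       # (batch, T, weight_dim)
        # Causal mask so step t only attends to checkpoints <= t.
        mask = nn.Transformer.generate_square_subsequent_mask(w_seq.size(1))
        h = self.encoder(self.embed(w_seq), mask=mask)
        return self.head(h)                         # next-step predictions

# Autoregressive (next-step) training on collected weight trajectories:
model = TIPLSketch(weight_dim=8)
trajectories = torch.randn(4, 10, 8)  # 4 RL runs, 10 checkpoints each (dummy data)
pred = model(trajectories)
loss = nn.functional.mse_loss(pred[:, :-1], trajectories[:, 1:])
loss.backward()
```

In this framing each checkpoint of flattened weights plays the role of a token, so "policy optimization" reduces to sequence continuation rather than gradient descent on the policy itself.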
Problem

Research questions and friction points this paper is trying to address.

Optimizing deep RL policy weights as trajectory modeling.
Exploring policy learning paths via historical network weights.
Using Transformer to model and optimize policy network dynamics.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Treats DRL policy weights as data modality
Uses Transformer for autoregressive weight processing
Trains TIPL model with policy learning path data
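The inference step implied by the bullets above — optimizing the policy by prediction rather than by gradient updates — amounts to an autoregressive rollout over weight vectors: feed an observed prefix of the weight trajectory, predict the next weights, append, and repeat. A self-contained sketch follows; `predict_next` is a hypothetical placeholder (naive linear extrapolation) standing in for the trained TIPL model.

```python
import numpy as np

def predict_next(weight_history):
    """Placeholder for the trained TIPL model: naive linear
    extrapolation from the last two weight vectors."""
    if len(weight_history) < 2:
        return weight_history[-1]
    return 2 * weight_history[-1] - weight_history[-2]

def rollout_policy_weights(seed_trajectory, steps):
    """Autoregressively extend a policy weight trajectory:
    each predicted weight vector is appended and fed back in,
    yielding candidate high-performance weights without gradients."""
    history = list(seed_trajectory)
    for _ in range(steps):
        history.append(predict_next(history))
    return history[-1]
```

With the real model in place of `predict_next`, the returned vector would be reshaped back into the policy network's layers and deployed directly.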