Improving Policy Optimization via ε-Retrain

📅 2024-06-12
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of simultaneously ensuring behavioral preference satisfaction and monotonic policy improvement while maintaining efficient exploration in policy optimization. We propose the ε-retraining framework, which introduces (i) an iterative retraining region construction mechanism that dynamically identifies preference-violating regions in the state space via behavior bias localization; (ii) a decaying ε-scheduling strategy to jointly balance global exploration and local correction; and (iii) neural network formal verification—using ReLU partitioning and linear programming—to quantify preference adherence. Evaluated across motion control, navigation, and power grid dispatch tasks with over one hundred random seeds, our method achieves a 37.2% increase in preference compliance rate and accelerates convergence by 2.1×, significantly improving sample efficiency and policy reliability.

Technology Category

Application Category

📝 Abstract
We present $varepsilon$-retrain, an exploration strategy designed to encourage a behavioral preference while optimizing policies with monotonic improvement guarantees. To this end, we introduce an iterative procedure for collecting retrain areas -- parts of the state space where an agent did not follow the behavioral preference. Our method then switches between the typical uniform restart state distribution and the retrain areas using a decaying factor $varepsilon$, allowing agents to retrain on situations where they violated the preference. Experiments over hundreds of seeds across locomotion, navigation, and power network tasks show that our method yields agents that exhibit significant performance and sample efficiency improvements. Moreover, we employ formal verification of neural networks to provably quantify the degree to which agents adhere to behavioral preferences.
Problem

Research questions and friction points this paper is trying to address.

Enhancing policy optimization with behavioral preference guarantees
Iteratively collecting retrain areas for policy improvement
Formally verifying neural networks for behavioral adherence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses $varepsilon$-retrain for exploration strategy
Introduces retrain areas for behavioral preference
Employs formal verification for preference adherence
🔎 Similar Papers
No similar papers found.