AI Summary
This work addresses the data inefficiency commonly encountered in reinforcement learning during post-training, where interaction data are scarce and quickly become outdated. The authors propose a novel policy optimization objective based on truncated importance sampling that incorporates a log-ratio Gaussian trust weight to softly suppress extreme importance ratios while preserving non-zero gradients. By replacing hard truncation with a tunable implicit constraint on the update magnitude, the method balances stability and robustness under limited sample budgets. Theoretical analysis grounded in concentration inequalities demonstrates an improved bias-variance trade-off, and empirical evaluations across varying replay buffer sizes consistently show enhanced training stability and sample efficiency.
Abstract
Post-training with reinforcement learning (RL) has recently shown strong promise for advancing multimodal agents beyond supervised imitation. However, RL remains limited by poor data efficiency, particularly in settings where interaction data are scarce and quickly become outdated. To address this challenge, GIPO (Gaussian Importance sampling Policy Optimization) is proposed as a policy optimization objective based on truncated importance sampling, replacing hard clipping with a log-ratio-based Gaussian trust weight that softly damps extreme importance ratios while maintaining non-zero gradients. Theoretical analysis shows that GIPO introduces an implicit, tunable constraint on the update magnitude, while concentration bounds guarantee robustness and stability under finite-sample estimation. Experimental results show that GIPO achieves state-of-the-art performance among clipping-based baselines across a wide range of replay buffer sizes, from near on-policy to highly stale data, while exhibiting a superior bias-variance trade-off, high training stability, and improved sample efficiency.
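To make the abstract's central idea concrete, below is a minimal sketch of what a log-ratio Gaussian trust weight could look like, contrasted with PPO-style hard clipping. The exact functional form, the bandwidth parameter `sigma`, and the surrogate combination used by GIPO are not given in this summary, so all of them are assumptions for illustration only.

```python
import numpy as np

def gaussian_trust_weight(ratio, sigma=0.5):
    """Hypothetical Gaussian trust weight on the log importance ratio.

    w(r) = exp(-(log r)^2 / (2 * sigma^2)) equals 1 at r = 1 and decays
    smoothly for extreme ratios, so the damped term keeps a non-zero
    gradient everywhere (unlike hard clipping, which zeroes it).
    `sigma` is an assumed tuning knob controlling the implicit trust region.
    """
    log_r = np.log(ratio)
    return np.exp(-(log_r ** 2) / (2.0 * sigma ** 2))

def soft_surrogate(ratio, advantage, sigma=0.5):
    # Assumed surrogate: Gaussian-damped importance-weighted advantage.
    return np.mean(gaussian_trust_weight(ratio, sigma) * ratio * advantage)

def clipped_surrogate(ratio, advantage, eps=0.2):
    # PPO-style hard clipping, for comparison: gradients vanish once
    # the ratio leaves [1 - eps, 1 + eps] (for the pessimistic branch).
    return np.mean(np.minimum(ratio * advantage,
                              np.clip(ratio, 1 - eps, 1 + eps) * advantage))
```

On-policy samples (ratio near 1) pass through with weight near 1, while a stale sample with, say, ratio 5 is damped to a weight below 0.01 rather than contributing a hard-clipped, gradient-free term.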