AI Summary
This work addresses the slow convergence and training instability commonly observed in off-policy reinforcement learning for high-dimensional robotic control, which often stem from error accumulation in the critic. The authors propose an enhanced Soft Actor-Critic framework that, for the first time, incorporates scaling principles from supervised learning into off-policy reinforcement learning. By increasing model capacity and data throughput, reducing gradient update frequency, and explicitly constraining the norms of weights, features, and gradients to suppress error propagation, the method achieves significantly better performance than PPO and strong off-policy baselines across more than 60 tasks in 10 simulated environments. Notably, it excels in high-dimensional dexterous manipulation tasks and reduces sim-to-real transfer training time for humanoid robots from hours to minutes.
Abstract
Reinforcement learning (RL) is a core approach for robot control when expert demonstrations are unavailable. On-policy methods such as Proximal Policy Optimization (PPO) are widely used for their stability, but their reliance on narrowly distributed on-policy data limits accurate policy evaluation in high-dimensional state and action spaces. Off-policy methods can overcome this limitation by learning from a broader state-action distribution, yet suffer from slow convergence and instability, as fitting a value function over diverse data requires many gradient updates, causing critic errors to accumulate through bootstrapping. We present FlashSAC, a fast and stable off-policy RL algorithm built on Soft Actor-Critic. Motivated by scaling laws observed in supervised learning, FlashSAC sharply reduces gradient updates while compensating with larger models and higher data throughput. To maintain stability at increased scale, FlashSAC explicitly bounds weight, feature, and gradient norms, curbing critic error accumulation. Across over 60 tasks in 10 simulators, FlashSAC consistently outperforms PPO and strong off-policy baselines in both final performance and training efficiency, with the largest gains on high-dimensional tasks such as dexterous manipulation. In sim-to-real humanoid locomotion, FlashSAC reduces training time from hours to minutes, demonstrating the promise of off-policy RL for sim-to-real transfer.
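The norm-bounding idea described above can be illustrated with a minimal sketch. Note this is an illustration only, not FlashSAC's actual implementation: the function name `clip_norm`, the choice of L2 projection, and the cap values are all assumptions for the example; the abstract does not specify the exact mechanisms.

```python
import numpy as np

def clip_norm(x, max_norm):
    # Project x back inside the L2 ball of radius max_norm
    # (identity if the norm is already within the bound).
    norm = np.linalg.norm(x)
    if norm > max_norm:
        return x * (max_norm / norm)
    return x

# Hypothetical caps on the three quantities the abstract says are bounded:
# weights, features, and gradients of the critic.
weights  = clip_norm(np.full((4, 4), 3.0), max_norm=10.0)  # weight-norm bound
features = clip_norm(np.array([3.0, 4.0]), max_norm=2.0)   # feature-norm bound
grads    = clip_norm(np.full(8, 50.0), max_norm=1.0)       # gradient clipping

print(np.linalg.norm(grads))  # bounded at (or below) 1.0
```

Applying such a projection after each update keeps bootstrapped critic targets from drifting as the number of gradient steps and the model size grow, which is the stability mechanism the abstract attributes to FlashSAC.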