🤖 AI Summary
In off-policy actor-critic methods for continuous control in deep reinforcement learning, a fundamental trade-off exists between pessimistic value-function updates and exploration: excessive pessimism stifles exploration, while excessive optimism induces instability and risk. This paper proposes Utility Soft Actor-Critic (USAC), a novel framework enabling decoupled, interpretable, and continuous control of the pessimism/optimism level for both the actor and the critic. Its core innovation is an uncertainty-aware utility function that modulates the conservatism of the critic's targets based on the critics' disagreement, while independently shaping the actor's policy updates. USAC moves beyond the conventional binary pessimistic/optimistic choice by placing the exploration-conservatism balance on a continuous, task-adaptive spectrum. Evaluated on multiple continuous-control benchmarks, USAC can outperform state-of-the-art algorithms including SAC and TD3 when its pessimism/optimism parameters are appropriately configured, empirically validating that the optimal degree of pessimism is task-dependent.
📝 Abstract
Off-policy actor-critic algorithms have shown promise in deep reinforcement learning for continuous control tasks. Their success largely stems from leveraging pessimistic state-action value function updates, which effectively address function approximation errors and improve performance. However, such pessimism can lead to under-exploration, constraining the agent's ability to explore and refine its policies. Conversely, optimism can counteract under-exploration, but it also carries the danger of excessive risk-taking and poor convergence if not properly balanced. Based on these insights, we introduce Utility Soft Actor-Critic (USAC), a novel framework within the actor-critic paradigm that enables independent control over the degree of pessimism/optimism for both the actor and the critic via interpretable parameters. USAC adapts its exploration strategy based on the uncertainty of the critics through a utility function that allows us to balance between pessimism and optimism separately. By going beyond binary choices of optimism and pessimism, USAC represents a significant step towards achieving balance within off-policy actor-critic algorithms. Our experiments across various continuous control problems show that the appropriate degree of pessimism or optimism depends on the nature of the task. Furthermore, we demonstrate that USAC can outperform state-of-the-art algorithms for appropriately configured pessimism/optimism parameters.
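To make the core idea concrete, here is a minimal sketch of an uncertainty-aware utility over an ensemble of critic estimates. The abstract does not give the paper's exact utility function, so this uses a common mean-plus-scaled-standard-deviation form as an illustrative stand-in: a negative coefficient yields pessimism (penalizing critic disagreement), zero is neutral, and a positive coefficient yields optimism. The function name `utility_value` and the coefficients `beta_actor`/`beta_critic` are hypothetical; the key point, as in USAC, is that the actor and critic coefficients are set independently.

```python
import numpy as np

def utility_value(q_values, beta):
    """Illustrative uncertainty-aware utility over critic estimates.

    beta < 0 -> pessimistic (penalize ensemble disagreement),
    beta = 0 -> neutral (plain ensemble mean),
    beta > 0 -> optimistic (reward ensemble disagreement).

    The mean + beta * std form is an assumption for illustration,
    not necessarily the paper's exact utility function.
    """
    q = np.asarray(q_values, dtype=float)
    return q.mean() + beta * q.std()

# Hypothetical, independently chosen coefficients: an optimistic
# actor target (to encourage exploration) alongside a pessimistic
# critic target (to stabilize value updates).
beta_actor, beta_critic = 0.5, -1.0
q_ensemble = [1.0, 2.0, 3.0]  # hypothetical critic estimates for one (s, a)

actor_target = utility_value(q_ensemble, beta_actor)
critic_target = utility_value(q_ensemble, beta_critic)
```

With this shape, the standard clipped double-Q target of TD3/SAC (the minimum over two critics) corresponds to one fixed, strongly pessimistic point on the spectrum, whereas varying `beta` sweeps the continuum between pessimism and optimism.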