Order Optimal Regret Bounds for Sharpe Ratio Optimization in the Bandit Setting

๐Ÿ“… 2025-08-19
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This paper studies the sequential decision-making problem of maximizing the Sharpe ratioโ€”i.e., risk-adjusted returnโ€”in stochastic multi-armed bandits, departing from the conventional cumulative-reward maximization paradigm. We propose SRTS, the first Thompson sampling algorithm specifically designed for Sharpe ratio optimization, which jointly performs Bayesian estimation of reward means and variances under Gaussian rewards. Theoretically, we introduce a novel Sharpe-ratio regret decomposition framework and derive matching upper and lower bounds, rigorously establishing that SRTS achieves logarithmic-order optimal regret. Empirically, SRTS significantly outperforms existing methods across diverse risk-sensitive settings. Our core contributions are threefold: (i) the first formal formulation of Sharpe ratio optimization as a bandit problem; (ii) the design of an analyzable, risk-aware sampling mechanism grounded in Bayesian inference; and (iii) the unification of theoretical optimality and empirical superiority in a single algorithm.

Technology Category

Application Category

๐Ÿ“ Abstract
In this paper, we investigate the problem of sequential decision-making for Sharpe ratio (SR) maximization in a stochastic bandit setting. We focus on the Thompson Sampling (TS) algorithm, a Bayesian approach celebrated for its empirical performance and exploration efficiency, under the assumption of Gaussian rewards with unknown parameters. Unlike conventional bandit objectives focusing on maximizing cumulative reward, Sharpe ratio optimization instead introduces an inherent tradeoff between achieving high returns and controlling risk, demanding careful exploration of both mean and variance. Our theoretical contributions include a novel regret decomposition specifically designed for the Sharpe ratio, highlighting the role of information acquisition about the reward distribution in driving learning efficiency. Then, we establish fundamental performance limits for the proposed algorithm exttt{SRTS} in terms of an upper bound on regret. We also derive the matching lower bound and show the order-optimality. Our results show that Thompson Sampling achieves logarithmic regret over time, with distribution-dependent factors capturing the difficulty of distinguishing arms based on risk-adjusted performance. Empirical simulations show that our algorithm significantly outperforms existing algorithms.
Problem

Research questions and friction points this paper is trying to address.

Optimizing Sharpe ratio in bandit setting with risk-return tradeoff
Developing Thompson Sampling algorithm for unknown Gaussian rewards
Establishing order-optimal regret bounds for risk-adjusted performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Thompson Sampling for Sharpe ratio optimization
Novel regret decomposition for risk-adjusted performance
Order-optimal logarithmic regret bounds established
๐Ÿ”Ž Similar Papers
No similar papers found.