AI Summary
This paper studies the sequential decision-making problem of maximizing the Sharpe ratio (i.e., risk-adjusted return) in stochastic multi-armed bandits, departing from the conventional cumulative-reward maximization paradigm. We propose SRTS, the first Thompson sampling algorithm specifically designed for Sharpe ratio optimization, which jointly performs Bayesian estimation of reward means and variances under Gaussian rewards. Theoretically, we introduce a novel Sharpe-ratio regret decomposition framework and derive matching upper and lower bounds, rigorously establishing that SRTS achieves order-optimal logarithmic regret. Empirically, SRTS significantly outperforms existing methods across diverse risk-sensitive settings. Our core contributions are threefold: (i) the first formal formulation of Sharpe ratio optimization as a bandit problem; (ii) an analyzable, risk-aware sampling mechanism grounded in Bayesian inference; and (iii) a single algorithm that unifies theoretical optimality with strong empirical performance.
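For reference, the per-arm objective can be written as follows, assuming the standard Sharpe ratio definition with a zero risk-free rate (the paper's exact normalization is not shown on this page): for an arm $i$ with reward distribution $\mathcal{N}(\mu_i, \sigma_i^2)$,

```latex
\mathrm{SR}_i = \frac{\mu_i}{\sigma_i},
\qquad
i^\star = \arg\max_{i} \mathrm{SR}_i ,
```

so, unlike cumulative-reward bandits, an arm with a slightly lower mean but much lower variance can be optimal, which is why both $\mu_i$ and $\sigma_i^2$ must be explored.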
Abstract
In this paper, we investigate the problem of sequential decision-making for Sharpe ratio (SR) maximization in a stochastic bandit setting. We focus on the Thompson Sampling (TS) algorithm, a Bayesian approach celebrated for its empirical performance and exploration efficiency, under the assumption of Gaussian rewards with unknown parameters. Unlike conventional bandit objectives that maximize cumulative reward, Sharpe ratio optimization introduces an inherent tradeoff between achieving high returns and controlling risk, demanding careful exploration of both mean and variance. Our theoretical contributions include a novel regret decomposition specifically designed for the Sharpe ratio, highlighting the role of information acquisition about the reward distribution in driving learning efficiency. We then establish an upper bound on the regret of the proposed algorithm SRTS, derive a matching lower bound, and show that SRTS is order-optimal. Our results show that Thompson Sampling achieves logarithmic regret over time, with distribution-dependent factors capturing the difficulty of distinguishing arms based on risk-adjusted performance. Empirical simulations show that our algorithm significantly outperforms existing algorithms.
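To make the idea concrete, here is a minimal sketch of a Sharpe-ratio Thompson sampling loop for Gaussian rewards with unknown mean and variance. The Normal-Inverse-Gamma (NIG) posterior, its hyperparameters, and the "sample `(mu, sigma2)` then pick the arm maximizing `mu / sqrt(sigma2)`" rule are assumptions based on the standard conjugate treatment of Gaussians; the paper's exact prior, update, and tie-breaking details are not given on this page.

```python
import numpy as np

rng = np.random.default_rng(0)

class NIGArm:
    """Normal-Inverse-Gamma posterior over (mu, sigma^2) of one arm.

    Hyperparameter choices here are illustrative defaults, not the paper's.
    """
    def __init__(self, mu0=0.0, kappa0=1.0, alpha0=2.0, beta0=1.0):
        self.mu, self.kappa, self.alpha, self.beta = mu0, kappa0, alpha0, beta0

    def update(self, x):
        # Standard conjugate update after one Gaussian observation x.
        kappa_new = self.kappa + 1.0
        mu_new = (self.kappa * self.mu + x) / kappa_new
        self.alpha += 0.5
        self.beta += 0.5 * self.kappa * (x - self.mu) ** 2 / kappa_new
        self.mu, self.kappa = mu_new, kappa_new

    def sample(self):
        # Joint posterior draw: sigma^2 ~ InvGamma(alpha, beta),
        # mu | sigma^2 ~ Normal(mu, sigma^2 / kappa).
        sigma2 = 1.0 / rng.gamma(self.alpha, 1.0 / self.beta)
        mu = rng.normal(self.mu, np.sqrt(sigma2 / self.kappa))
        return mu, sigma2

def srts_step(arms):
    # Thompson step: sample parameters per arm, then play the arm
    # whose *sampled* Sharpe ratio mu / sigma is largest.
    draws = [arm.sample() for arm in arms]
    return max(range(len(arms)),
               key=lambda i: draws[i][0] / np.sqrt(draws[i][1]))

# Tiny simulation: arm 1 has a lower mean but far lower variance,
# so its Sharpe ratio (0.8 / sqrt(0.2) ~ 1.79) beats arm 0's (1.0).
true_params = [(1.0, 1.0), (0.8, 0.2)]  # (mean, variance)
arms = [NIGArm(), NIGArm()]
pulls = [0, 0]
for _ in range(2000):
    i = srts_step(arms)
    mu, var = true_params[i]
    arms[i].update(rng.normal(mu, np.sqrt(var)))
    pulls[i] += 1
print(pulls)  # the higher-Sharpe arm (index 1) should dominate
```

Note the contrast with reward-maximizing TS: a mean-only sampler would concentrate on arm 0 here, while the risk-adjusted criterion favors arm 1, illustrating why joint exploration of mean and variance is essential.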