🤖 AI Summary
This paper addresses the estimation and optimization of two classes of generalized convex risk measures, utility-based shortfall risk (UBSR) and optimized certainty equivalents (OCE), for robust risk control in finance and decision-making. Methodologically, it establishes the first unified non-asymptotic error bounds for both measures, accommodating unbounded random variables; develops unbiased gradient estimators under differentiable parametrizations, with provable mean-squared error convergence rates; and embeds major risk measures, including VaR, CVaR, entropic risk, expected utility, and monotone mean-variance, within the UBSR/OCE framework. Leveraging sample-average approximation and stochastic gradient optimization, it designs efficient algorithms with controllable convergence rates. These contributions broaden the applicability and statistical tractability of convex risk measures and support their deployment in data-driven risk management.
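As a concrete illustration of the OCE embedding of CVaR mentioned above, CVaR admits the well-known Rockafellar–Uryasev variational form CVaR_α(L) = min_t { t + E[(L − t)_+]/(1 − α) }, whose minimizing t is VaR_α(L). A minimal sample-average sketch (function name, sample sizes, and parameter values are illustrative, not from the paper):

```python
import numpy as np

def cvar_oce(losses, alpha):
    """Estimate CVaR via its OCE (Rockafellar-Uryasev) representation:
    CVaR_alpha = min_t { t + E[(L - t)_+] / (1 - alpha) }.
    The sample-average objective is piecewise linear in t, so its
    minimum is attained at one of the sample points."""
    losses = np.asarray(losses, dtype=float)
    cand = np.sort(losses)  # candidate values of t
    obj = cand + np.mean(
        np.maximum(losses[None, :] - cand[:, None], 0.0), axis=1
    ) / (1.0 - alpha)
    i = np.argmin(obj)
    # Returns (CVaR estimate, VaR estimate = minimizing t).
    return obj[i], cand[i]

# Illustration: for L ~ Uniform(0, 1) and alpha = 0.9,
# VaR_0.9 = 0.9 and CVaR_0.9 = 0.95.
rng = np.random.default_rng(1)
losses = rng.uniform(size=2000)
cvar, var = cvar_oce(losses, alpha=0.9)
```

The same sample-average objective, minimized jointly over t and a decision parameter, is the standard route to CVaR optimization by stochastic gradient methods.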
📝 Abstract
We consider the problems of estimation and optimization of two popular convex risk measures: utility-based shortfall risk (UBSR) and optimized certainty equivalent (OCE) risk. We extend these risk measures to cover possibly unbounded random variables. We cover prominent risk measures such as the entropic risk, expectile risk, monotone mean-variance risk, Value-at-Risk, and Conditional Value-at-Risk as special cases of either the UBSR or the OCE risk. In the context of estimation, we derive non-asymptotic bounds on the mean absolute error (MAE) and mean-squared error (MSE) of the classical sample average approximation (SAA) estimators of both the UBSR and the OCE. Next, in the context of optimization, we derive expressions for the UBSR gradient and the OCE gradient under a smooth parameterization. Utilizing these expressions, we propose gradient estimators for both the UBSR and the OCE. We use the SAA estimator of UBSR in both of these gradient estimators, and derive non-asymptotic bounds on the MAE and MSE of the proposed gradient estimation schemes. We incorporate the aforementioned gradient estimators into a stochastic gradient (SG) algorithm for optimization. Finally, we derive non-asymptotic bounds that quantify the rate of convergence of our SG algorithm for the optimization of the UBSR and the OCE risk measures.
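The SAA estimator of UBSR mentioned in the abstract reduces to a one-dimensional root-finding problem: given samples X_1, …, X_n, a loss function l, and a threshold λ, find the smallest t with (1/n)·Σ l(−X_i − t) ≤ λ. Since the sample-average map is nonincreasing in t, bisection suffices. A hedged sketch (the exponential loss, the threshold λ = 1, and the bisection bracket are illustrative choices of ours, not the paper's):

```python
import numpy as np

def ubsr_saa(samples, loss, lam, lo=-100.0, hi=100.0, tol=1e-8):
    """SAA estimate of utility-based shortfall risk: the smallest t
    with (1/n) * sum_i loss(-X_i - t) <= lam. The map
    g(t) = mean(loss(-X - t)) - lam is nonincreasing in t for an
    increasing loss, so we locate its root by bisection."""
    def g(t):
        return np.mean(loss(-samples - t)) - lam
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid  # still above the threshold: move right
        else:
            hi = mid
    return 0.5 * (lo + hi)

# With the exponential loss l(z) = exp(beta * z) and lam = 1, UBSR
# equals the entropic risk (1/beta) * log E[exp(-beta * X)], which
# for X ~ N(0, 1) and beta = 1 is exactly 0.5.
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=100_000)
beta, lam = 1.0, 1.0
risk = ubsr_saa(x, lambda z: np.exp(beta * z), lam)
```

The non-asymptotic MAE/MSE bounds in the paper quantify how fast estimators of this form concentrate around the true UBSR as n grows.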