🤖 AI Summary
This paper addresses the challenge of simultaneously capturing risk preferences and ensuring computational efficiency in distortion risk measure (DRM) optimization. The authors propose stochastic-approximation gradient descent algorithms grounded in two dual representations: the Distortion-Measure (DM) form and the Quantile-Function (QF) form. The method combines multi-timescale iterations (three timescales for the DM form, two for the QF form), generalized likelihood ratio estimation, kernel density estimation, and quantile-based gradient computation into a robust yet efficient hybrid optimization framework: the DM form provides stability near jump discontinuities of the distortion function, while the QF form accelerates convergence in smooth regions. The paper establishes, for the first time, strong convergence guarantees, with a rate of $O(k^{-4/7})$ for the DM form and a faster $O(k^{-2/3})$ for the QF form. Empirically, the algorithms significantly outperform baselines in robust portfolio selection and extend successfully to multi-echelon inventory management, demonstrating both generality and scalability.
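For reference, the two dual representations can be written as follows. This is a sketch of one common convention (the Choquet-integral form); the paper's exact orientation of the distortion function $g$ may differ:

$$
\rho_g(X) \;=\; \underbrace{\int_0^{\infty} g\big(\mathbb{P}(X > x)\big)\,\mathrm{d}x \;-\; \int_{-\infty}^{0} \Big[1 - g\big(\mathbb{P}(X > x)\big)\Big]\,\mathrm{d}x}_{\text{DM form}} \;=\; \underbrace{\int_0^1 F_X^{-1}(1-u)\,\mathrm{d}g(u)}_{\text{QF form}},
$$

where $F_X^{-1}$ is the quantile function of $X$ and $g:[0,1]\to[0,1]$ is non-decreasing with $g(0)=0$ and $g(1)=1$. When $g$ is differentiable, the QF form reduces to $\int_0^1 F_X^{-1}(1-u)\,g'(u)\,\mathrm{d}u$, which is the smooth case the QF-form algorithm exploits.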
📝 Abstract
Distortion Risk Measures (DRMs) capture risk preferences in decision-making and serve as general criteria for managing uncertainty. This paper proposes gradient descent algorithms for DRM optimization based on two dual representations: the Distortion-Measure (DM) form and the Quantile-Function (QF) form. The DM-form employs a three-timescale algorithm to track quantiles, compute their gradients, and update decision variables, using Generalized Likelihood Ratio estimation and kernel-based density estimation. The QF-form provides a simpler two-timescale approach that avoids the need for complex quantile gradient estimation. A hybrid form integrates both approaches, applying the DM-form for robust performance near jumps of the distortion function and the QF-form for efficiency in smooth regions. Proofs of strong convergence and convergence rates for the proposed algorithms are provided. In particular, the DM-form achieves an optimal rate of $O(k^{-4/7})$, while the QF-form attains a faster rate of $O(k^{-2/3})$. Numerical experiments confirm their effectiveness and demonstrate substantial improvements over baselines in robust portfolio selection tasks. The method's scalability is further illustrated through integration into deep reinforcement learning: a DRM-based Proximal Policy Optimization algorithm is developed and applied to multi-echelon dynamic inventory management, showcasing its practical applicability.
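To make the two-timescale idea concrete, below is a minimal illustrative sketch, not the paper's algorithm: the toy loss `h`, the distortion $g(u)=u^2$, and the step-size exponents are all assumptions. A fast timescale tracks quantiles of the loss via stochastic approximation; a slow timescale updates the decision variable using the pathwise derivative weighted by $g'$ evaluated at the loss's estimated quantile level, a standard surrogate for the QF-form gradient when $g$ is smooth.

```python
import numpy as np

rng = np.random.default_rng(0)

def h(theta, xi):                 # toy loss: quadratic in theta, noise xi (assumed)
    return (theta - 1.0) ** 2 + 0.5 * theta * xi

def dh_dtheta(theta, xi):         # pathwise derivative of the toy loss
    return 2.0 * (theta - 1.0) + 0.5 * xi

def gprime(u):                    # derivative of a smooth distortion g(u) = u**2 (assumed)
    return 2.0 * u

m = 50
levels = (np.arange(m) + 0.5) / m     # grid of quantile levels u_1..u_m
q = np.zeros(m)                       # fast-timescale quantile trackers
theta = 0.0

for k in range(1, 50001):
    beta = k ** -0.6                  # fast step size (quantile tracking)
    gamma = k ** -0.9                 # slow step size (decision update)
    xi = rng.standard_normal()
    x = h(theta, xi)
    # Fast timescale: each q_j follows the SA recursion whose fixed point
    # is the u_j-quantile of h(theta, .).
    q += beta * (levels - (x <= q))
    # Slow timescale: estimate F(x) by the fraction of tracked quantiles
    # below x, then weight the pathwise derivative by g'(F(x)).
    u_hat = np.mean(q <= x)
    theta -= gamma * gprime(u_hat) * dh_dtheta(theta, xi)

print(f"theta after SA: {theta:.3f}")
```

The key design point mirrored here is the timescale separation: the quantile trackers move on the faster step size $\beta_k$ so that, from the perspective of the slowly updated decision variable, they have effectively converged to the current quantiles before $\theta$ changes appreciably.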