🤖 AI Summary
This study addresses the severe out-of-sample degradation of traditional minimum-variance portfolio strategies in high-dimensional settings where the number of assets exceeds the sample size, a regime prone to overfitting. The authors propose applying a "Ridgelet" estimator to learn the weights of zero-variance portfolios, yielding strong out-of-sample generalization under overparameterization. Theoretical analysis shows that the approach can achieve optimal risk in the overparameterized regime and exhibits a double descent phenomenon. Extensive simulations and empirical studies confirm that the method significantly outperforms the unregularized pseudoinverse benchmark, making it a robust and competitive choice for high-dimensional portfolio optimization.
📝 Abstract
When the number of assets is larger than the sample size, the minimum-variance portfolio interpolates the training data, delivering pathologically zero in-sample variance. We show that if the weights of the zero-variance portfolio are learned by a novel "Ridgelet" estimator, the portfolio generalizes well to new test data. It exhibits the double descent phenomenon and can achieve optimal risk in the overparameterized regime, where the number of assets dominates the sample size. In contrast, a "Ridgeless" estimator that invokes the pseudoinverse attains in-sample interpolation but diverges away from out-of-sample optimality. Extensive simulations and empirical studies demonstrate that the Ridgelet method performs competitively in high-dimensional portfolio optimization.
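The contrast the abstract draws can be illustrated with a generic sketch: when the sample covariance is singular (more assets than observations), the "Ridgeless" route plugs its Moore-Penrose pseudoinverse into the classical minimum-variance formula, while a ridge-regularized route inverts the covariance plus a small multiple of the identity. The snippet below is only an illustration on synthetic i.i.d. returns with an arbitrary penalty `lam`; the paper's actual Ridgelet estimator and its tuning are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overparameterized regime from the abstract: more assets p than observations n.
n, p, sigma = 50, 200, 0.01
train = rng.normal(0.0, sigma, size=(n, p))    # synthetic i.i.d. training returns
test = rng.normal(0.0, sigma, size=(1000, p))  # fresh returns for out-of-sample evaluation
S = np.cov(train, rowvar=False)                # sample covariance, singular since rank < n
S_test = np.cov(test, rowvar=False)
ones = np.ones(p)

def min_var_weights(cov_inv):
    # Classical minimum-variance solution w = C^{-1} 1 / (1' C^{-1} 1).
    w = cov_inv @ ones
    return w / (ones @ w)

# "Ridgeless": plug the Moore-Penrose pseudoinverse of the singular S into the formula.
w_ridgeless = min_var_weights(np.linalg.pinv(S))

# Ridge-regularized analogue: (S + lam*I) is invertible for any lam > 0.
# lam = 0.01 is an arbitrary illustrative choice, not the paper's tuning rule.
lam = 0.01
w_ridge = min_var_weights(np.linalg.inv(S + lam * np.eye(p)))

oos = lambda w: float(w @ S_test @ w)          # realized out-of-sample variance
print(f"ridgeless OOS variance: {oos(w_ridgeless):.3e}")
print(f"ridge     OOS variance: {oos(w_ridge):.3e}")
```

On this toy data the ridge-regularized weights typically achieve lower realized out-of-sample variance than the pseudoinverse weights, mirroring the abstract's claim that the unregularized interpolator drifts away from out-of-sample optimality.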