Deep Learning for Subspace Regression

📅 2025-09-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In high-dimensional parameter spaces, interpolation of linear subspaces becomes ill-posed and conventional interpolation methods fail. To address this, we propose a neural-network-based subspace regression framework that recasts subspace interpolation as learning a mapping from parameters to the Grassmann manifold. Our key innovation is "hyperspace prediction": instead of directly regressing subspaces of the target dimension, we learn higher-dimensional but geometrically smoother intermediate subspaces, thereby reducing the complexity of the mapping and enhancing generalization. We establish theoretical guarantees showing uniform approximation for elliptic eigenvalue problems under this strategy. Furthermore, we design a Riemannian loss function that explicitly respects the intrinsic geometry of the Grassmann manifold. Experiments on parametric PDEs and eigenvalue problems demonstrate that our method significantly improves subspace prediction accuracy and online computational stability over traditional interpolation approaches in model order reduction tasks.
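A Grassmann-aware loss of the kind the summary mentions can be sketched via principal angles between subspaces. The function below is an illustrative implementation of the standard projection (chordal) metric, not the paper's exact loss:

```python
import numpy as np

def grassmann_distance(U, V):
    """Projection-metric distance between span(U) and span(V).

    U, V: orthonormal bases of shape (n, k). The result equals
    sqrt(sum_i sin^2(theta_i)) over the principal angles theta_i,
    and depends only on the subspaces, not on the chosen bases.
    """
    # Singular values of U^T V are the cosines of the principal angles.
    s = np.linalg.svd(U.T @ V, compute_uv=False)
    s = np.clip(s, -1.0, 1.0)
    return float(np.sqrt(np.sum(1.0 - s**2)))
```

Because it is basis-invariant, a loss like this lets the network output any basis of the correct subspace rather than one fixed representative.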

📝 Abstract
It is often possible to perform reduced order modelling by specifying a linear subspace which accurately captures the dynamics of the system. This approach becomes especially appealing when the subspace explicitly depends on the parameters of the problem. A practical way to apply such a scheme is to compute subspaces for a selected set of parameters in the computationally demanding offline stage and, in the online stage, to approximate the subspace for unseen parameters by interpolation. For realistic problems the parameter space is high-dimensional, which renders classical interpolation strategies infeasible or unreliable. We propose to relax the interpolation problem to regression, introduce several loss functions suitable for subspace data, and use a neural network as an approximation to the high-dimensional target function. To further simplify the learning problem we introduce redundancy: in place of predicting a subspace of a given dimension, we predict a larger subspace. We show theoretically that this strategy decreases the complexity of the mapping for elliptic eigenproblems with constant coefficients and makes the mapping smoother for general smooth functions on the Grassmann manifold. Empirical results also show that accuracy improves significantly when larger-than-needed subspaces are predicted. With a set of numerical illustrations we demonstrate that subspace regression can be useful for a range of tasks including parametric eigenproblems, deflation techniques, relaxation methods, optimal control, and the solution of parametric partial differential equations.
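The redundancy idea in the abstract can be quantified: a predicted m-dimensional subspace succeeds if it contains the k-dimensional target (m > k). The sketch below, our own illustration rather than code from the paper, measures the worst-case angle by which the target sticks out of the prediction:

```python
import numpy as np

def containment_error(U_target, W_pred):
    """How far span(U_target) lies outside span(W_pred).

    U_target: orthonormal basis of shape (n, k).
    W_pred:   orthonormal basis of shape (n, m), with m >= k.
    Returns sin of the largest principal angle: zero exactly when
    the target subspace is contained in the predicted one.
    """
    # Singular values of U^T W are the cosines of the k principal angles.
    s = np.linalg.svd(U_target.T @ W_pred, compute_uv=False)
    s = np.clip(s, 0.0, 1.0)
    # The smallest cosine corresponds to the worst-captured direction.
    return float(np.sqrt(1.0 - s[-1] ** 2))
```

With this yardstick, predicting a larger subspace only has to get containment right, which is an easier target than matching a k-dimensional subspace exactly.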
Problem

Research questions and friction points this paper is trying to address.

Predicting parameter-dependent subspaces for reduced order modeling
Using neural networks for high-dimensional subspace regression
Improving accuracy by predicting redundant, larger-than-needed subspaces
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural network approximates high-dimensional subspace regression
Redundant subspace prediction simplifies learning complexity
Loss functions designed for data on the Grassmann manifold
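As a toy illustration of the regression setup (the names and the linear "network" here are hypothetical, not the authors' architecture), a model can be made to output a valid point on the Grassmannian by orthonormalizing its raw output with a QR factorization:

```python
import numpy as np

def predict_subspace(params, weights, n, k):
    """Toy sketch: map a parameter vector to an n x k orthonormal basis.

    `weights` stands in for a trained network; the QR factorization
    projects the raw output onto an orthonormal basis, so the result
    represents a k-dimensional subspace (a point on the Grassmannian).
    """
    raw = (weights @ params).reshape(n, k)  # raw network output
    Q, _ = np.linalg.qr(raw)                # orthonormal basis of its span
    return Q
```

In practice the mapping would be a deep network trained with a subspace-aware loss, but the final orthonormalization step serves the same purpose.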