🤖 AI Summary
Subspace learning methods run at different dimensions generally produce non-nested, mutually inconsistent representations. To address this, the paper proposes a nested subspace modeling framework grounded in flag manifolds. The core methodological contribution is the “flag trick”: a geometric technique that enforces hierarchical, nested sequences of subspaces via nested orthogonal projection operators, while preserving the structure of the original Grassmannian optimization problem. The approach yields a geometric generalization of classical linear dimensionality reduction methods, including PCA and LDA, without introducing additional hyperparameters. Theoretical analysis supports the framework’s soundness, and experiments on multiple benchmarks show that it restores consistency of representations across dimensions while remaining effective on downstream tasks.
📝 Abstract
Many machine learning methods look for low-dimensional representations of the data. The underlying subspace can be estimated by first choosing a dimension $q$ and then optimizing a certain objective function over the space of $q$-dimensional subspaces (the Grassmannian). Trying different values of $q$ generally yields non-nested subspaces, which raises an important issue of consistency between the data representations. In this paper, we propose a simple trick to enforce nestedness in subspace learning methods. It consists in lifting Grassmannian optimization problems to flag manifolds (the space of nested subspaces of increasing dimension) via nested projectors. We apply the flag trick to several classical machine learning methods and show that it successfully addresses the nestedness issue.
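The nestedness property described above can be illustrated concretely. The sketch below, a minimal example not taken from the paper, builds a flag of PCA subspaces from a single SVD and forms the corresponding nested orthogonal projectors $P_1, P_2, \dots$; nestedness means that composing the projector onto a larger subspace with the projector onto a smaller one reduces to the smaller projection, so representations at dimension $q$ are reusable at dimension $q+1$:

```python
import numpy as np

# Synthetic centered data with decaying variance along 5 directions.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.2])
X -= X.mean(axis=0)

# One SVD gives a flag of principal subspaces: span(U[:, :1]) ⊂ span(U[:, :2]) ⊂ ...
_, _, Vt = np.linalg.svd(X, full_matrices=False)
U = Vt.T

# Nested orthogonal projectors P_q = U_q U_q^T built from the same flag.
projectors = [U[:, :q] @ U[:, :q].T for q in range(1, 4)]

# Nestedness check: projecting onto the (q+1)-dim subspace and then onto the
# q-dim subspace is the same as projecting onto the q-dim subspace directly.
for P_q, P_q1 in zip(projectors, projectors[1:]):
    assert np.allclose(P_q @ P_q1, P_q)
```

By contrast, subspaces obtained from independent optimizations at each $q$ would generally fail this check, which is the inconsistency the flag trick is designed to remove.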