🤖 AI Summary
Classical hypergraph regularization does not enforce higher-order smoothness. Method: Higher-Order Hypergraph Learning (HOHL) regularizes via powers of multiscale hypergraph Laplacian operators; this paper studies a truncated version of HOHL. Contribution/Results: Theoretically, the paper proves asymptotic consistency for truncated HOHL and derives explicit convergence rates when HOHL is used as a regularizer in fully supervised learning. Methodologically, it applies HOHL to active learning and to data lacking an underlying geometric structure. Empirical results show that HOHL consistently outperforms baselines across diverse tasks, performing especially well on non-geometric data. Together, these results extend the theoretical foundations of HOHL and demonstrate its versatility and robustness beyond geometric settings.
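The summary does not display the regularizer itself; a schematic form consistent with the description above is sketched below. The scale Laplacians \(L_k\), weights \(\alpha_k\), powers \(s_k\), and truncation level \(K\) are illustrative notation, not the paper's.

```latex
% Schematic HOHL-style penalty on node values u (illustrative notation):
% a weighted sum of quadratic forms in powers of multiscale Laplacians,
% truncated to finitely many scales.
R_{\mathrm{HOHL}}(u) \;=\; \sum_{k=1}^{K} \alpha_k \, u^{\top} L_k^{\,s_k} u,
\qquad \alpha_k > 0, \quad s_k \in \mathbb{N}
```

Each \(L_k\) is a graph Laplacian induced by the hypergraph at a different scale, and raising it to the power \(s_k\) is what penalizes higher-order roughness rather than only first-order variation.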
📝 Abstract
Higher-Order Hypergraph Learning (HOHL) was recently introduced as a principled alternative to classical hypergraph regularization, enforcing higher-order smoothness via powers of multiscale Laplacians induced by the hypergraph structure. Prior work established the well- and ill-posedness of HOHL through an asymptotic consistency analysis in geometric settings. We extend this theoretical foundation by proving the consistency of a truncated version of HOHL and deriving explicit convergence rates when HOHL is used as a regularizer in fully supervised learning. We further demonstrate its strong empirical performance in active learning and on datasets lacking an underlying geometric structure, highlighting HOHL's versatility and robustness across diverse learning settings.
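To make the fully supervised setting concrete, here is a minimal NumPy sketch under stated assumptions: a single-scale clique-expansion Laplacian with unit hyperedge weights stands in for the paper's multiscale construction, and the fit solves the ridge-style problem min_u ||u - y||^2 + tau * u^T L^s u in closed form. The function names and parameters are illustrative, not from the paper.

```python
import numpy as np

def clique_expansion_laplacian(n, hyperedges):
    """Unnormalized Laplacian of the clique-expansion (skeleton) graph:
    every pair of nodes sharing a hyperedge is connected with weight 1
    (an assumption; the paper's multiscale construction is more refined)."""
    W = np.zeros((n, n))
    for e in hyperedges:
        for i in e:
            for j in e:
                if i != j:
                    W[i, j] = 1.0
    return np.diag(W.sum(axis=1)) - W

def hohl_style_fit(y, L, s=2, tau=0.5):
    """Fully supervised fit with a higher-order smoothness penalty:
    minimize ||u - y||^2 + tau * u^T L^s u, whose minimizer is
    u = (I + tau * L^s)^{-1} y."""
    n = len(y)
    return np.linalg.solve(np.eye(n) + tau * np.linalg.matrix_power(L, s), y)

# Toy usage: 6 nodes, two overlapping hyperedges, noisy labels.
L = clique_expansion_laplacian(6, [{0, 1, 2, 3}, {3, 4, 5}])
y = np.array([1.0, 1.1, 0.9, 0.5, 0.0, 0.1])
print(np.round(hohl_style_fit(y, L, s=2), 3))
```

Raising L to a power s > 1 is what makes the smoothness "higher-order": it damps oscillatory eigenmodes of the skeleton graph more aggressively than the standard s = 1 Laplacian penalty.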