🤖 AI Summary
This work addresses the lack of rigorous theoretical foundations for hypergraph semi-supervised learning. We present a systematic asymptotic consistency analysis of variational learning on random geometric hypergraphs, characterizing the conditions under which the learning problem is well posed and its solutions converge to a weighted $p$-Laplacian equation. Building on this analysis, we propose Higher-Order Hypergraph Learning (HOHL), a framework that achieves multiscale regularization via powers of skeleton-graph Laplacians, and we prove that its objective functional $\Gamma$-converges to a higher-order Sobolev seminorm. By combining random geometric hypergraph modeling, spectral graph theory, and higher-order Laplacian regularization, HOHL yields a well-posed learning problem with explicit multiscale smoothness control. Experiments on standard benchmark datasets support the theoretical findings and show strong empirical performance.
📝 Abstract
Hypergraphs provide a natural framework for modeling higher-order interactions, yet their theoretical underpinnings in semi-supervised learning remain limited. We provide an asymptotic consistency analysis of variational learning on random geometric hypergraphs, precisely characterizing the conditions under which hypergraph learning is well posed and its solutions converge to a weighted $p$-Laplacian equation. Motivated by this analysis, we propose Higher-Order Hypergraph Learning (HOHL), which regularizes via powers of Laplacians from skeleton graphs to enforce multiscale smoothness; its objective functional $\Gamma$-converges to a higher-order Sobolev seminorm. Empirically, HOHL performs strongly against standard baselines.
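To make the regularizer concrete, here is a minimal sketch in Python/NumPy of the kind of higher-order Laplacian regularization the abstract describes. This is an assumption-laden illustration, not the paper's implementation: the function names (`skeleton_laplacian`, `hohl_fit`), the clique-expansion skeleton construction, the uniform hyperedge weights, and the quadratic (p = 2) fidelity-plus-regularizer objective are all choices made here for illustration.

```python
import numpy as np

def skeleton_laplacian(n, hyperedges):
    """Unnormalized Laplacian of a skeleton graph built by clique-expanding
    the given hyperedges (a modeling assumption, not prescribed by the paper)."""
    W = np.zeros((n, n))
    for e in hyperedges:
        for i in e:
            for j in e:
                if i != j:
                    W[i, j] = 1.0  # uniform weights, assumed for simplicity
    return np.diag(W.sum(axis=1)) - W

def hohl_fit(n, hyperedges_by_order, labels, powers, lam=1e-2):
    """Minimize sum_{i labeled} (u_i - y_i)^2 + lam * sum_k u^T L_k^{s_k} u.

    Powers of skeleton Laplacians penalize roughness at multiple scales:
    larger s_k penalizes higher-order differences of u more strongly.
    The objective is quadratic in u, so it reduces to one linear solve.
    """
    R = np.zeros((n, n))
    for edges, s in zip(hyperedges_by_order, powers):
        R += np.linalg.matrix_power(skeleton_laplacian(n, edges), s)
    mask, y = np.zeros(n), np.zeros(n)
    for i, v in labels.items():
        mask[i], y[i] = 1.0, v
    # Stationarity condition: (M + lam * R) u = M y, with M = diag(mask).
    return np.linalg.solve(np.diag(mask) + lam * R, mask * y)

# Toy usage: 6 nodes, a pairwise skeleton (power 1) and a 3-uniform
# skeleton (power 2), with two labeled nodes.
pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
triples = [(1, 2, 3), (2, 3, 4)]
u = hohl_fit(6, [pairs, triples], labels={0: -1.0, 5: 1.0}, powers=[1, 2])
print(np.sign(u))  # propagated labels
```

Under these assumptions the multiscale effect comes from summing Laplacian powers of skeletons at different hyperedge orders; the solve is well posed as long as each node is connected to a labeled node through some skeleton.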