Interface Laplace Learning: Learnable Interface Term Helps Semi-Supervised Learning

📅 2024-08-10
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing graph-based semi-supervised learning (GSSL) methods neglect the non-smoothness of label functions across class boundaries at extremely low labeling rates (<0.1%): conventional Laplacian regularization imposes an overly strong global smoothness assumption that fails to capture sharp decision boundaries. Method: We propose a learnable interface term, novel in GSSL, that automatically identifies class boundaries within k-hop neighborhoods and learns their weights end to end, eliminating the need for hand-crafted boundary priors. By jointly optimizing the graph Laplacian regularizer and this adaptive interface term, the approach explicitly models boundary discontinuities. Contribution/Results: Extensive experiments on MNIST, FashionMNIST, and CIFAR-10 show that the method significantly outperforms state-of-the-art graph-based semi-supervised approaches, especially under ultra-sparse labeling; the consistent gains support both the effectiveness and the generalizability of interface-aware modeling in low-label-rate regimes.
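To make the setup concrete, here is a minimal sketch of classical graph Laplace learning (harmonic label propagation), extended with a hypothetical per-node source term standing in for the paper's learned interface term. The function name, the dense-matrix formulation, and the way the interface term enters the right-hand side are assumptions for illustration; the paper's actual parameterization and training procedure are not reproduced here. Setting `interface=None` recovers plain Laplace learning.

```python
import numpy as np

def laplace_learning(W, labeled_idx, labels, n_classes, interface=None):
    """Graph Laplace learning: solve L u = f on the unlabeled nodes with
    Dirichlet (one-hot) conditions at the labeled nodes.

    `interface` is a hypothetical (n, n_classes) source term standing in
    for the paper's learned interface term; None (all zeros) recovers
    classical Laplace learning.
    """
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W          # unnormalized graph Laplacian
    Y = np.zeros((n, n_classes))            # one-hot boundary values
    Y[labeled_idx, labels] = 1.0
    unlabeled = np.setdiff1d(np.arange(n), labeled_idx)
    f = np.zeros((n, n_classes)) if interface is None else interface
    # Solve the unlabeled block:  L_uu u_u = f_u - L_ul y_l
    L_uu = L[np.ix_(unlabeled, unlabeled)]
    rhs = f[unlabeled] - L[np.ix_(unlabeled, labeled_idx)] @ Y[labeled_idx]
    U = Y.copy()
    U[unlabeled] = np.linalg.solve(L_uu, rhs)
    return U.argmax(axis=1)
```

On a 6-node path graph with only the two endpoints labeled, the harmonic solution interpolates linearly between the labels, so the predicted classes split the path in half; a nonzero interface term at the midpoint would shift where that split occurs.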

๐Ÿ“ Abstract
We introduce a novel framework, called Interface Laplace learning, for graph-based semi-supervised learning. Motivated by the observation that an interface should exist between different classes where the function value is non-smooth, we introduce a Laplace learning model that incorporates an interface term. This model challenges the long-standing assumption that functions are smooth at all unlabeled points. In the proposed approach, we add an interface term to the Laplace learning model at the interface positions. We provide a practical algorithm to approximate the interface positions using k-hop neighborhood indices, and to learn the interface term from labeled data without artificial design. Our method is efficient and effective, and we present extensive experiments demonstrating that Interface Laplace learning achieves better performance than other recent semi-supervised learning approaches at extremely low label rates on the MNIST, FashionMNIST, and CIFAR-10 datasets.
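The abstract says interface positions are approximated from k-hop neighborhood indices. One plausible reading, sketched below under that assumption, is to flag a node as a candidate interface position when its k-hop neighborhood mixes more than one (pseudo-)label. The function names and the use of pseudo-labels with `-1` for "unknown" are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def k_hop_neighbors(adj, node, k):
    """Indices of all nodes within k hops of `node` (excluding itself),
    found by k rounds of breadth-first expansion on the adjacency matrix."""
    seen = {node}
    frontier = [node]
    for _ in range(k):
        nxt = []
        for u in frontier:
            for v in np.flatnonzero(adj[u]):
                if v not in seen:
                    seen.add(v)
                    nxt.append(v)
        frontier = nxt
    seen.discard(node)
    return seen

def candidate_interface_nodes(adj, node_labels, k):
    """Hypothetical interface detector: flag nodes whose k-hop
    neighborhood contains more than one distinct (pseudo-)label.
    `node_labels[v] == -1` marks an unknown label and is ignored."""
    flagged = []
    for u in range(adj.shape[0]):
        classes = {node_labels[v] for v in k_hop_neighbors(adj, u, k)
                   if node_labels[v] >= 0}
        if len(classes) > 1:
            flagged.append(u)
    return flagged
```

On a 6-node path graph labeled `[0, 0, 0, 1, 1, 1]`, a 1-hop scan flags exactly the two nodes straddling the class boundary, which is where the paper's model would place the interface term.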
Problem

Research questions and friction points this paper is trying to address.

Improves semi-supervised learning with interface term
Challenges smoothness assumption at unlabeled points
Detects interface positions using k-hop neighborhoods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces learnable interface term for Laplace learning
Approximates interface positions using k-hop neighborhoods
Achieves better performance at low label rates