🤖 AI Summary
Hyperbolic neural networks (HNNs) excel at modeling hierarchical data but suffer from sensitivity to curvature choice; existing approaches lack theoretical grounding for curvature’s impact, and fixed curvature often leads to suboptimal convergence.
Method: We present the first PAC-Bayes–based analysis revealing how curvature in hyperbolic neural networks affects generalization through its influence on the smoothness of the loss landscape. Building on this insight, we propose a sharpness-aware curvature learning framework: upper-level optimization minimizes a scope sharpness measure for curvatures, while lower-level optimization performs parameter updates—solved efficiently via implicit differentiation and gradient approximation.
Contribution/Results: Our theoretical analysis provides generalization error bounds and convergence guarantees. Experiments demonstrate consistent and significant improvements in generalization across classification, long-tailed recognition, noisy-label learning, and few-shot learning tasks.
📝 Abstract
Hyperbolic neural networks (HNNs) have demonstrated notable efficacy in representing real-world data with hierarchical structures by exploiting the geometric properties of hyperbolic spaces, which are characterized by negative curvature. Curvature plays a crucial role in optimizing HNNs: inappropriate curvatures may cause HNNs to converge to suboptimal parameters, degrading overall performance. To date, however, no theoretical foundation has been developed for the effect of curvature on HNNs. In this paper, we derive a PAC-Bayesian generalization bound for HNNs, highlighting the role of curvature in their generalization via its effect on the smoothness of the loss landscape. Driven by the derived bound, we propose a sharpness-aware curvature learning method that smooths the loss landscape, thereby improving the generalization of HNNs. In our method, we design a scope sharpness measure for curvatures, which is minimized through a bi-level optimization process. We then introduce an implicit differentiation algorithm that efficiently solves the bi-level optimization by approximating the gradients of curvatures. We present approximation-error and convergence analyses of the proposed method, showing that the approximation error is upper-bounded and that the method converges under bounded HNN gradients. Experiments on four settings—classification, learning from long-tailed data, learning from noisy data, and few-shot learning—show that our method improves the performance of HNNs.
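The bi-level structure described above—an upper level that takes a sharpness-aware step on the curvature while a lower level updates the model parameters—can be illustrated with a minimal toy sketch. Everything here is hypothetical: the quadratic loss, the gradients, and the SAM-style sign perturbation stand in for the paper's actual HNN loss, scope sharpness measure, and implicit-differentiation algorithm, and are chosen only to make the alternating update pattern concrete.

```python
import numpy as np

def loss(w, c):
    # Toy stand-in for an HNN training loss (hypothetical form):
    # the curvature c scales how sharply the loss bends around the
    # optimum, and the second term weakly prefers c near -1.
    return (1.0 + c**2) * np.sum(w**2) + 0.1 * (c + 1.0) ** 2

def grad_w(w, c):
    # Analytic gradient of the toy loss w.r.t. the parameters w.
    return 2.0 * (1.0 + c**2) * w

def grad_c(w, c):
    # Analytic gradient of the toy loss w.r.t. the curvature c.
    return 2.0 * c * np.sum(w**2) + 0.2 * (c + 1.0)

w = np.array([1.0, -0.5])   # model parameters
c = -0.2                    # negative curvature of the hyperbolic space
rho, lr_w, lr_c = 0.05, 0.1, 0.05

for _ in range(500):
    # Lower level: ordinary gradient step on the parameters
    # at the current curvature.
    w = w - lr_w * grad_w(w, c)
    # Upper level: sharpness-aware curvature step. Perturb c in the
    # loss-ascent direction, then descend using the gradient at the
    # perturbed point -- a SAM-style proxy for minimizing sharpness.
    c_adv = c + rho * np.sign(grad_c(w, c))
    c = c - lr_c * grad_c(w, c_adv)
```

In the paper the upper level is solved with implicit differentiation rather than this explicit two-point perturbation; the sketch only shows the alternation between the two optimization levels.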