🤖 AI Summary
Existing hyperbolic neural networks often mix Euclidean and hyperbolic operations, compromising geometric intrinsicness. This work proposes ILNN, the first fully intrinsic neural network in the Lorentz model, in which every component is rigorously defined within hyperbolic space. Specifically, the decision function of fully connected layers is based on the distance from a point to a hyperbolic hyperplane, and novel intrinsic modules are introduced, including GyroLBN normalization, a gyro-additive bias, digamma-aligned Lorentz concatenation, and Lorentz dropout. Evaluated on CIFAR-10/100 and the TEB and GUE genomic benchmarks, ILNN achieves state-of-the-art performance and computational efficiency among hyperbolic models while consistently outperforming strong Euclidean baselines.
📝 Abstract
Real-world data frequently exhibit latent hierarchical structures, which can be naturally represented by hyperbolic geometry. Although recent hyperbolic neural networks have demonstrated promising results, many existing architectures remain only partially intrinsic, mixing Euclidean operations with hyperbolic ones or relying on extrinsic parameterizations. To address this, we propose the \emph{Intrinsic Lorentz Neural Network} (ILNN), a fully intrinsic hyperbolic architecture that conducts all computations within the Lorentz model. At its core, the network introduces a novel \emph{point-to-hyperplane} fully connected (FC) layer, replacing traditional Euclidean affine logits with closed-form hyperbolic distances from features to learned Lorentz hyperplanes, thereby ensuring that the resulting geometric decision functions respect the inherent curvature. Around this fundamental layer, we design further intrinsic modules: GyroLBN, a Lorentz batch normalization that couples gyro-centering with gyro-scaling, consistently outperforming both LBN and GyroBN while reducing training time. We additionally propose a gyro-additive bias for the FC output, a Lorentz patch-concatenation operator that aligns the expected log-radius across feature blocks via a digamma-based scale, and a Lorentz dropout layer. Extensive experiments on CIFAR-10/100 and two genomic benchmarks (TEB and GUE) demonstrate that ILNN achieves state-of-the-art performance and computational efficiency among hyperbolic models and consistently surpasses strong Euclidean baselines. The code is available at \href{https://github.com/Longchentong/ILNN}{\textcolor{magenta}{this url}}.
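To make the point-to-hyperplane idea concrete, here is a minimal NumPy sketch of the standard closed-form distance from a point on the hyperboloid (Lorentz model, curvature $-1$) to a hyperplane $H_w = \{p : \langle w, p\rangle_{\mathcal{L}} = 0\}$ with spacelike normal $w$. This is the textbook formula from the hyperbolic classification literature, not the authors' exact layer; the function names (`expmap0`, `dist_to_hyperplane`) and the unit-curvature convention are illustrative assumptions.

```python
import numpy as np

def lorentz_inner(u, v):
    # Lorentzian inner product: <u, v>_L = -u0*v0 + sum_i ui*vi
    return -u[0] * v[0] + np.dot(u[1:], v[1:])

def expmap0(v_spatial):
    # Lift a Euclidean tangent vector onto the hyperboloid (curvature -1)
    # via the exponential map at the origin o = (1, 0, ..., 0).
    norm = np.linalg.norm(v_spatial)
    if norm < 1e-12:
        return np.concatenate([[1.0], np.zeros_like(v_spatial)])
    return np.concatenate([[np.cosh(norm)], np.sinh(norm) * v_spatial / norm])

def dist_to_hyperplane(x, w):
    # Closed-form distance from hyperboloid point x to H_w for
    # spacelike w (<w, w>_L > 0):
    #     d(x, H_w) = | arcsinh( <w, x>_L / sqrt(<w, w>_L) ) |
    # In a point-to-hyperplane FC layer, one such (signed) distance
    # per learned hyperplane w_k serves as the logit for class k.
    return np.abs(np.arcsinh(lorentz_inner(w, x) / np.sqrt(lorentz_inner(w, w))))

# Example: a feature lifted onto the hyperboloid, scored against one hyperplane.
x = expmap0(np.array([0.3, -0.2]))      # satisfies <x, x>_L = -1
w = np.array([0.0, 1.0, 0.0])           # spacelike normal, <w, w>_L = 1
logit = dist_to_hyperplane(x, w)
```

The origin $(1, 0, \dots, 0)$ lies on this hyperplane, so its distance is exactly zero, which is a quick sanity check for any implementation.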