Neural Lattice Reduction: A Self-Supervised Geometric Deep Learning Approach

📅 2023-11-14
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses lattice basis reduction—a classical combinatorial optimization problem—by proposing the first self-supervised geometric deep learning method that requires no labeled data. Methodologically, it parameterizes the space of reduction algorithms via neural networks and directly outputs decomposable unimodular matrices to optimize basis orthogonality. It is the first to embed isometric/scaling invariance and hyperoctahedral group equivariance into a self-supervised framework, and introduces a grid-convolutional architecture enabling joint reduction of multiple lattices and computational amortization. Experiments demonstrate that the method achieves reduction quality and time complexity comparable to the LLL algorithm on standard benchmarks; its grid-convolutional variant significantly improves efficiency for multi-lattice processing and shows strong practical utility in real-world applications such as wireless communications. The core contribution lies in establishing a novel paradigm for lattice reduction: unsupervised, geometric-prior-driven, and interpretable via unimodular factorization.
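The summary's key design point is that the network outputs unimodular matrices: integer matrices with determinant ±1, which change the basis without changing the lattice it generates, making each reduction step interpretable and lattice-preserving. The paper's own code is not shown here; the following is only a minimal NumPy sketch of that defining property (the function name `is_unimodular` and the example matrices are illustrative, not from the paper).

```python
import numpy as np

def is_unimodular(U: np.ndarray) -> bool:
    """True if U is an integer matrix with determinant +1 or -1.

    Right-multiplying a lattice basis B by such a U yields a new
    basis B @ U that generates exactly the same lattice, which is
    why lattice-reduction steps can be expressed this way.
    """
    is_integer = bool(np.all(U == np.round(U)))
    det_is_unit = bool(np.isclose(abs(np.linalg.det(U)), 1.0))
    return is_integer and det_is_unit

# An elementary column operation (subtract column 0 from column 1)
# is unimodular; scaling a column by 2 is not.
U_ok = np.array([[1, -1],
                 [0,  1]])
U_bad = np.array([[2, 0],
                  [0, 1]])
```

Composing several such elementary matrices is one way a "decomposable" unimodular output, as described above, could be assembled.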
📝 Abstract
Lattice reduction is a combinatorial optimization problem aimed at finding the most orthogonal basis in a given lattice. The Lenstra-Lenstra-Lovász (LLL) algorithm is the best algorithm in the literature for solving this problem. In light of recent research on algorithm discovery, in this work, we would like to answer this question: is it possible to parametrize the algorithm space for the lattice reduction problem with neural networks and find an algorithm without supervised data? Our strategy is to use equivariant and invariant parametrizations and train in a self-supervised way. We design a deep neural model outputting factorized unimodular matrices and train it in a self-supervised manner by penalizing non-orthogonal lattice bases. We incorporate the symmetries of lattice reduction into the model by making it invariant to isometries and scaling of the ambient space and equivariant with respect to the hyperoctahedral group permuting and flipping the lattice basis elements. We show that this approach yields an algorithm with comparable complexity and performance to the LLL algorithm on a set of benchmarks. Additionally, motivated by certain applications for wireless communication, we extend our method to a convolutional architecture which performs joint reduction of spatially-correlated lattices arranged in a grid, thereby amortizing its cost over multiple lattices.
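The abstract's training signal is "penalizing non-orthogonal lattice bases". A standard quantity measuring this is the orthogonality defect: the product of the basis vectors' lengths divided by the lattice volume, which equals 1 exactly when the basis is orthogonal and grows as it skews. Whether the paper uses this exact quantity is an assumption; the sketch below only illustrates the kind of self-supervised penalty the abstract describes.

```python
import numpy as np

def orthogonality_defect(basis: np.ndarray) -> float:
    """Orthogonality defect of a lattice basis (basis vectors as columns).

    defect(B) = (prod of column norms) / sqrt(det(B^T B)).
    Equals 1 for an orthogonal basis and exceeds 1 otherwise, so a
    loss like log(defect) can penalize non-orthogonal bases without
    any labeled reference reduction.
    """
    col_norms = np.linalg.norm(basis, axis=0)
    volume = np.sqrt(np.linalg.det(basis.T @ basis))
    return float(np.prod(col_norms) / volume)

# An orthogonal basis attains the minimum defect of 1;
# a skewed basis of the same lattice has a larger defect.
orthogonal = np.eye(3)
skewed = np.array([[1.0, 0.9],
                   [0.0, 0.1]])
```

Because the defect is invariant to rotations and uniform scaling of the ambient space, it is also consistent with the isometry/scaling invariances the abstract builds into the model.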
Problem

Research questions and friction points this paper is trying to address.

Self-supervised neural lattice reduction algorithm
Parametrize algorithm space without supervised data
Convolutional architecture for joint lattice reduction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised geometric deep learning
Equivariant and invariant parametrizations
Convolutional architecture for joint reduction