Symmetric Perceptrons, Number Partitioning and Lattices

📅 2025-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the average-case computational complexity of the symmetric binary perceptron (SBP) and the number partitioning problem (NPP). Via worst-case-to-average-case reductions from approximate shortest vector problems (SVP) on lattices, the paper establishes (nearly) tight average-case hardness for both problems. Assuming worst-case SVP hardness, the Bansal–Spencer algorithm (margin κ(x) = Θ(1/√x)) is optimal for SBP up to polylogarithmic factors, confirming the conjecture of Gamarnik, Kizildag, Perkins and Xu (FOCS 2022); and assuming subexponential worst-case SVP hardness, the Karmarkar–Karp differencing algorithm (κ(m) = 2⁻ᴼ⁽ˡᵒᵍ² ᵐ⁾) is nearly optimal for NPP, since no polynomial-time algorithm achieves κ(m) = 2⁻Ω⁽ˡᵒᵍ³ ᵐ⁾. Together these results nearly close the long-standing computational–statistical gaps for both problems. The analysis draws on techniques from lattice-based cryptography and high-dimensional probability.
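The Karmarkar–Karp algorithm discussed above is the classical largest-differencing heuristic for number partitioning: repeatedly replace the two largest numbers with their difference until one value remains; that value is the achieved discrepancy. A minimal sketch (not code from the paper; the Gaussian instance and the `karmarkar_karp` helper are illustrative choices, applied to absolute values since the heuristic operates on nonnegative inputs):

```python
import heapq
import random

def karmarkar_karp(nums):
    """Largest-differencing heuristic for number partitioning.

    Repeatedly pops the two largest values and pushes their
    difference; the final remaining value is the discrepancy
    |sum_i x_i * a_i| realized by some signing x in {-1,1}^m.
    """
    heap = [-v for v in nums]  # simulate a max-heap by negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)  # largest
        b = -heapq.heappop(heap)  # second largest, so a >= b
        heapq.heappush(heap, -(a - b))
    return -heap[0] if heap else 0.0

# Illustrative random NPP instance (magnitudes of Gaussians).
random.seed(0)
instance = [abs(random.gauss(0.0, 1.0)) for _ in range(1000)]
print(karmarkar_karp(instance))
```

On typical random instances the final discrepancy is dramatically smaller than the input scale, matching the 2⁻ᴼ⁽ˡᵒᵍ² ᵐ⁾ guarantee quoted above; the paper's lower bound says no polynomial-time algorithm can do substantially better.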

📝 Abstract
The symmetric binary perceptron ($\mathrm{SBP}_{\kappa}$) problem with parameter $\kappa : \mathbb{R}_{\geq 1} \to [0,1]$ is an average-case search problem defined as follows: given a random Gaussian matrix $\mathbf{A} \sim \mathcal{N}(0,1)^{n \times m}$ as input where $m \geq n$, output a vector $\mathbf{x} \in \{-1,1\}^m$ such that $$\| \mathbf{A} \mathbf{x} \|_{\infty} \leq \kappa(m/n) \cdot \sqrt{m}~.$$ The number partitioning problem ($\mathrm{NPP}_{\kappa}$) corresponds to the special case of setting $n=1$. There is considerable evidence that both problems exhibit large computational-statistical gaps. In this work, we show (nearly) tight average-case hardness for these problems, assuming the worst-case hardness of standard approximate shortest vector problems on lattices. For $\mathrm{SBP}$, for large $n$, the best that efficient algorithms have been able to achieve is $\kappa(x) = \Theta(1/\sqrt{x})$ (Bansal and Spencer, Random Structures and Algorithms 2020), which is a far cry from the statistical bound. The problem has been extensively studied in the TCS and statistics communities, and Gamarnik, Kizildag, Perkins and Xu (FOCS 2022) conjecture that Bansal-Spencer is tight: namely, $\kappa(x) = \widetilde{\Theta}(1/\sqrt{x})$ is the optimal value achieved by computationally efficient algorithms. We prove their conjecture assuming the worst-case hardness of approximating the shortest vector problem on lattices. For $\mathrm{NPP}$, Karmarkar and Karp's classical differencing algorithm achieves $\kappa(m) = 2^{-O(\log^2 m)}~.$ We prove that Karmarkar-Karp is nearly tight: namely, no polynomial-time algorithm can achieve $\kappa(m) = 2^{-\Omega(\log^3 m)}$, once again assuming the worst-case subexponential hardness of approximating the shortest vector problem on lattices to within a subexponential factor.
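The SBP constraint in the abstract is easy to verify directly: a candidate $\mathbf{x} \in \{-1,1\}^m$ is a solution iff $\|\mathbf{A}\mathbf{x}\|_\infty \leq \kappa(m/n)\sqrt{m}$. A minimal verification sketch (not from the paper; the helper names and the sample instance are illustrative):

```python
import math
import random

def sbp_margin(A, x):
    """Return ||A x||_inf for an n-by-m matrix A (list of rows)
    and a sign vector x in {-1,1}^m."""
    return max(abs(sum(a_ij * x_j for a_ij, x_j in zip(row, x)))
               for row in A)

def is_sbp_solution(A, x, kappa):
    """Check the SBP constraint ||A x||_inf <= kappa(m/n) * sqrt(m),
    where kappa is a function of the aspect ratio m/n."""
    n, m = len(A), len(A[0])
    return sbp_margin(A, x) <= kappa(m / n) * math.sqrt(m)

# Illustrative random Gaussian instance with m >= n.
random.seed(1)
n, m = 5, 200
A = [[random.gauss(0.0, 1.0) for _ in range(m)] for _ in range(n)]
x = [random.choice((-1, 1)) for _ in range(m)]
# A uniformly random signing typically has margin on the order of
# sqrt(m log n), far above the Bansal-Spencer target Theta(1/sqrt(x)).
print(sbp_margin(A, x))
```

The hardness results above say that, under worst-case SVP hardness, no efficient algorithm can find an $\mathbf{x}$ passing this check with $\kappa$ substantially smaller than the Bansal–Spencer bound.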
Problem

Research questions and friction points this paper is trying to address.

Symmetric Binary Perceptron
Number Partitioning Problem
Algorithmic Hardness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lattice-Based Cryptography
Symmetric Binary Perceptron
Number Partitioning Problem