🤖 AI Summary
This paper addresses efficient linear programming (LP) under differential privacy, covering both homogeneous constraints (Ax ≥ 0) and general constraints (Ax ≤ b, x ≥ 0). The authors propose the first framework integrating a privatized rescaled perceptron algorithm with a refined equality-constraint identification procedure, yielding high-probability feasible solutions under (ε, δ)-differential privacy. The theoretical contributions include: (i) the first unified treatment of positive- and zero-margin constraints; (ii) significantly improved upper bounds on the number of violated constraints—O(d²/ε · log²(d/δβ) · √log(1/ρ₀)) for homogeneous LPs and O(d⁴/ε · log²·⁵(d/δ) · √log dU) for general LPs, the latter improving upon prior work by at least a factor of d⁵; and (iii) a polynomial-time algorithm that simultaneously ensures rigorous privacy guarantees and solution quality.
📝 Abstract
We study the problem of solving linear programs of the form $Ax \le b$, $x \ge 0$ with differential privacy. For homogeneous LPs $Ax \ge 0$, we give an efficient $(\varepsilon,\delta)$-differentially private algorithm which, with probability at least $1-\beta$, finds in polynomial time a solution that satisfies all but $O(\frac{d^{2}}{\varepsilon}\log^{2}\frac{d}{\delta\beta}\sqrt{\log\frac{1}{\rho_{0}}})$ constraints, for problems with margin $\rho_{0}>0$. This improves the bound of $O(\frac{d^{5}}{\varepsilon}\log^{1.5}\frac{1}{\rho_{0}}\,\mathrm{poly}\log(d,\frac{1}{\delta},\frac{1}{\beta}))$ by [Kaplan-Mansour-Moran-Stemmer-Tur, STOC '25]. For general LPs $Ax \le b$, $x \ge 0$ with potentially zero margin, we give an efficient $(\varepsilon,\delta)$-differentially private algorithm that w.h.p. drops $O(\frac{d^{4}}{\varepsilon}\log^{2.5}\frac{d}{\delta}\sqrt{\log dU})$ constraints, where $U$ is an upper bound on the absolute value of the entries of $A$ and $b$. This improves the result of Kaplan et al. by at least a factor of $d^{5}$. Our techniques build upon privatizing a rescaling perceptron algorithm by [Hoberg-Rothvoss, IPCO '17] and a more refined iterative procedure for identifying equality constraints by Kaplan et al.
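To make the starting point of the technique concrete, here is a minimal sketch of the *classical, non-private* perceptron for a homogeneous feasibility problem $Ax \ge 0$ — the baseline that the rescaling variant of Hoberg-Rothvoss accelerates and that the paper privatizes. This is not the authors' algorithm: the rescaling phases and the private selection of violated constraints are omitted, and the function name and margin setup are illustrative.

```python
import numpy as np

def perceptron_homogeneous(A, max_iters=10_000, rng=None):
    """Find x with Ax >= 0 for a feasibility problem with positive margin.

    Classical perceptron: repeatedly pick a violated row a_i (a_i . x < 0)
    and update x <- x + a_i / ||a_i||.  With margin rho_0 > 0 this needs
    O(1/rho_0^2) updates; the rescaling variant reduces the margin
    dependence to logarithmic by enlarging the feasible cone between
    phases.  A differentially private version would select the violated
    constraint through a private mechanism instead of reading it directly.
    """
    rng = np.random.default_rng(rng)
    n, d = A.shape
    rows = A / np.linalg.norm(A, axis=1, keepdims=True)  # unit-norm rows
    x = rng.standard_normal(d)  # random nonzero start (x = 0 is degenerate)
    for _ in range(max_iters):
        violated = np.flatnonzero(rows @ x < 0)
        if violated.size == 0:
            return x  # every constraint a_i . x >= 0 is satisfied
        i = rng.choice(violated)  # non-private choice of a violated row
        x = x + rows[i]
    return None  # no feasible point found within the iteration budget
```

On instances with margin $\rho_0$, the mistake bound guarantees termination after at most about $1/\rho_0^2$ updates, which is why the $\sqrt{\log(1/\rho_0)}$-type dependence of the rescaled and privatized variants is a substantial improvement.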