🤖 AI Summary
This work investigates the minimal size of a deep ReLU neural network required to memorize \(N\) labeled data points, as a joint function of its width \(W\) and depth \(L\). For data points in the unit ball with pairwise separation distance \(\delta\), the authors construct a memorizing network satisfying \(W^2 L^2 = \mathcal{O}(N \log(1/\delta))\) and prove a matching lower bound \(W^2 L^2 = \Omega(N \log(1/\delta))\) that any such network must satisfy. Unlike conventional analyses that measure memorization capacity only by total parameter or neuron counts, this characterizes it through the joint interplay of width and depth, making the trade-off between the two explicit. When \(\delta^{-1}\) grows polynomially in \(N\), the construction is optimal up to logarithmic factors.
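As a purely illustrative sketch (not from the paper, and ignoring the hidden constants in the \(\mathcal{O}(\cdot)\) bound), the relation \(W^2 L^2 \approx N \log(1/\delta)\) says that for a fixed data budget, width and depth are interchangeable: doubling the width roughly halves the required depth. A hypothetical helper making this arithmetic concrete:

```python
import math

def min_depth(N, delta, W):
    """Smallest integer depth L with W^2 * L^2 >= N * log(1/delta),
    i.e., L >= sqrt(N * log(1/delta)) / W (constants ignored)."""
    return math.ceil(math.sqrt(N * math.log(1.0 / delta)) / W)

# Trade-off: doubling the width roughly halves the required depth.
N, delta = 10_000, 1e-3
for W in (10, 20, 40):
    print(f"W={W:3d} -> L={min_depth(N, delta, W)}")
```

Here `min_depth`, the constant 1 in front of the bound, and the sample values of `N` and `delta` are all assumptions for illustration; the paper's actual construction determines the constants.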
📝 Abstract
This paper studies the memorization capacity of deep neural networks with ReLU activation. Specifically, we investigate the minimal size of such networks needed to memorize any $N$ data points in the unit ball with pairwise separation distance $\delta$ and discrete labels. Most prior studies characterize the memorization capacity by the number of parameters or neurons. We generalize these results by constructing neural networks, whose width $W$ and depth $L$ satisfy $W^2L^2 = \mathcal{O}(N\log(\delta^{-1}))$, that can memorize any $N$ data samples. We also prove that any such network must satisfy the lower bound $W^2L^2 = \Omega(N\log(\delta^{-1}))$, which implies that our construction is optimal up to logarithmic factors when $\delta^{-1}$ is polynomial in $N$. Hence, we explicitly characterize the trade-off between width and depth for the memorization capacity of deep neural networks in this regime.