🤖 AI Summary
This work investigates the efficient approximation of the median of $d$ inputs using constant-depth, linear-width ReLU neural networks. The authors propose a network architecture based on a multi-stage iterative procedure that progressively eliminates non-central elements while preserving a candidate set containing the median. This approach achieves, for the first time, exponentially small approximation error for the median at constant depth, challenging the depth lower bounds suggested by prior work on approximating the max function. Moreover, the study establishes a general reduction from max approximation to median approximation, so the new median guarantees carry over to the max. On the unit hypercube, the resulting approximation error decays exponentially in the network width, strictly improving on existing results for max-function approximation.
📝 Abstract
We study the approximation of the median of $d$ inputs using ReLU neural networks. We present depth-width tradeoffs under several settings, culminating in a constant-depth, linear-width construction that achieves exponentially small approximation error with respect to the uniform distribution over the unit hypercube. By further establishing a general reduction from the maximum to the median, our results break a barrier suggested by prior work on the maximum function, which indicated that linear-width networks should require depth growing at least as fast as $\log\log d$ to achieve comparable accuracy. Our construction relies on a multi-stage procedure that iteratively eliminates non-central elements while preserving a candidate set around the median. We overcome obstacles that do not arise for the maximum, yielding approximation results that are strictly stronger than those previously known for the maximum itself.
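For context on the depth-width baseline the abstract refers to: the classical identity $\max(a,b) = \tfrac{1}{2}\big((a+b) + |a-b|\big)$ is exactly representable with a single hidden ReLU layer, and chaining such gates in a balanced tournament computes the max of $d$ inputs at depth roughly $\log_2 d$. The sketch below (plain NumPy, illustrating this well-known baseline, not the paper's constant-depth median construction) makes the tradeoff concrete:

```python
import numpy as np

def relu(x):
    """Elementwise ReLU activation."""
    return np.maximum(x, 0.0)

def max2(a, b):
    """Exact max of two reals using one hidden ReLU layer.

    Relies on the identities x = relu(x) - relu(-x) and
    |x| = relu(x) + relu(-x), so that
    max(a, b) = ((a + b) + |a - b|) / 2.
    """
    s = relu(a + b) - relu(-(a + b))   # recovers a + b
    t = relu(a - b) + relu(b - a)      # recovers |a - b|
    return 0.5 * (s + t)

def tournament_max(xs):
    """Max of d inputs via a balanced tournament of max2 gates.

    Each round halves the number of candidates, so the ReLU
    depth grows like log2(d) -- the kind of depth cost that
    constant-depth constructions aim to avoid.
    """
    xs = list(xs)
    while len(xs) > 1:
        nxt = [max2(xs[i], xs[i + 1]) for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2 == 1:           # carry the odd element forward
            nxt.append(xs[-1])
        xs = nxt
    return xs[0]
```

Each `max2` gate is exact, so the tournament computes the maximum exactly; the point of the paper is that for the median, comparable (indeed exponentially small) error is achievable without letting the depth grow with $d$ at all.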