🤖 AI Summary
This work investigates one-bit quantization of all weights except those in the output layer for multilayer random feature models. Leveraging high-dimensional probability theory and asymptotic analysis, we provide the first rigorous proof that, in the wide-network limit, such quantization preserves generalization performance exactly. We derive an asymptotically exact closed-form expression for the generalization error applicable to models with arbitrary depth—significantly extending prior results limited to single-layer or highly structured settings. Empirical evaluation demonstrates substantial inference speedup on consumer-grade laptop GPUs, confirming both efficacy and practicality. Our core contributions are: (1) establishing the precise condition—i.e., the “lossless generalization boundary”—under which full-layer one-bit weight quantization incurs no degradation in generalization; and (2) developing a rigorous, asymptotically exact error analysis framework for multilayer random feature models. The theoretical guarantees hold under mild distributional assumptions on data and random features, and the framework unifies treatment across architectural depths.
📝 Abstract
Recent advances in neural networks have led to significant computational and memory demands, spurring interest in one-bit weight compression to enable efficient inference on resource-constrained devices. However, the theoretical underpinnings of such compression remain poorly understood. We address this gap by analyzing one-bit quantization in the Random Features model, a simplified framework that corresponds to neural networks with random representations. We prove that, asymptotically, quantizing the weights of all layers except the last incurs no loss in generalization error compared to the full-precision Random Features model. Our findings offer theoretical insights into neural network compression. We also demonstrate empirically that one-bit quantization yields significant inference speed-ups for Random Features models even on a laptop GPU, confirming the practical benefits of our work. Additionally, we provide an asymptotically precise characterization of the generalization error for Random Features with an arbitrary number of layers. To the best of our knowledge, this analysis is more general than any previous result in the related literature.