🤖 AI Summary
This paper addresses two key limitations of conventional maximum score estimators: (i) non-differentiability due to the use of indicator functions, hindering gradient-based optimization; and (ii) inapplicability to multi-index single-crossing (MISC) structures. To overcome these, the authors propose a differentiable maximum score estimation framework based on the ReLU activation function. Methodologically, ReLU replaces the discontinuous indicator function, yielding a smooth objective amenable to gradient optimization and seamless integration with deep neural networks. Theoretically, the paper establishes, for the first time, the $n^{-s/(2s+1)}$ convergence rate and asymptotic normality of the estimator under the MISC setting. Practically, the authors design a dedicated DNN layer enabling end-to-end estimation. The work removes the classical estimator's non-convexity and non-smoothness bottlenecks, extending maximum score estimation to a differentiable, scalable, and learnable paradigm that combines statistical rigor with deep learning compatibility.
📝 Abstract
We propose a new formulation of the maximum score estimator that uses compositions of rectified linear unit (ReLU) functions, instead of indicator functions as in Manski (1975, 1985), to encode the sign alignment restrictions. Since the ReLU function is Lipschitz, our new ReLU-based maximum score criterion function is substantially easier to optimize using standard gradient-based optimization packages. We also show that our ReLU-based maximum score (RMS) estimator generalizes to an umbrella framework defined by multi-index single-crossing (MISC) conditions, to which the original maximum score estimator does not apply. We establish the $n^{-s/(2s+1)}$ convergence rate and asymptotic normality for the RMS estimator under order-$s$ Hölder smoothness. In addition, we propose an alternative estimator using a further reformulation of RMS as a special layer in a deep neural network (DNN) architecture, which allows the estimation procedure to be implemented via state-of-the-art software and hardware for DNNs.
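To illustrate the core idea (not the paper's actual RMS construction, which composes ReLU functions to encode the sign-alignment restrictions under MISC conditions), the sketch below smooths Manski's single-index criterion $\frac{1}{n}\sum_i (2y_i-1)\,\mathbb{1}\{x_i'\beta \ge 0\}$ by replacing the indicator with $\mathrm{ReLU}(x_i'\beta)$, making the objective piecewise-linear in $\beta$ and amenable to gradient ascent. The data-generating process, step size, and initializer are illustrative assumptions:

```python
import numpy as np

# Simulated binary-choice data: y = 1{x'beta0 + eps >= 0} (illustrative DGP)
rng = np.random.default_rng(0)
n, d = 4000, 3
beta0 = np.array([1.0, -2.0, 1.5])
beta0 /= np.linalg.norm(beta0)          # beta identified only up to scale
X = rng.standard_normal((n, d))
eps = rng.logistic(scale=0.3, size=n)
y = (X @ beta0 + eps > 0).astype(float)
s = 2.0 * y - 1.0                        # sign targets in {-1, +1}

def relu_score(b):
    # Smoothed maximum score objective: mean of s_i * ReLU(x_i' b).
    # Unlike the indicator version, this is Lipschitz in b.
    return np.mean(s * np.maximum(X @ b, 0.0))

def grad(b):
    # Subgradient: (1/n) sum_i s_i * 1{x_i' b > 0} * x_i
    idx = X @ b
    return (s * (idx > 0)) @ X / n

# Crude moment-based initial direction, then projected gradient ascent
# on the unit sphere (the ReLU objective scales linearly in ||b||).
b = X.T @ s
b /= np.linalg.norm(b)
for _ in range(200):
    b = b + 0.5 * grad(b)
    b /= np.linalg.norm(b)

cos = abs(b @ beta0)                     # alignment with the true direction
```

Because the ReLU objective is positively homogeneous in $\beta$, normalizing to the unit sphere after each step mirrors the usual scale normalization in maximum score estimation; the paper's DNN-layer reformulation instead delegates this optimization to standard deep learning toolchains.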