ReLU-Based and DNN-Based Generalized Maximum Score Estimators

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses two key limitations of conventional maximum score estimators: (i) non-differentiability of the indicator-based criterion, which blocks gradient-based optimization; and (ii) inapplicability to multi-index single-crossing (MISC) structures. To overcome these, we propose a maximum score estimation framework built on the ReLU activation function. Methodologically, compositions of ReLU functions replace the discontinuous indicator, yielding a Lipschitz criterion that is differentiable almost everywhere, amenable to gradient-based optimization, and easy to integrate with deep neural networks. Theoretically, we establish, for the first time, the $n^{-s/(2s+1)}$ convergence rate and asymptotic normality of the estimator under the MISC setting. Practically, we design a dedicated DNN layer that enables end-to-end estimation with standard deep learning software and hardware. The work relaxes the non-smoothness and optimization bottlenecks of the classical estimator, recasting maximum score estimation as a differentiable, scalable, and learnable procedure that combines statistical guarantees with deep learning compatibility.
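To fix ideas, consider the binary-choice setup of Manski (1985); the ramp surrogate below is an illustrative construction, not necessarily the exact RMS criterion used in the paper. The classical maximum score criterion maximizes

$$
S_n(\beta) \;=\; \frac{1}{n}\sum_{i=1}^{n} (2y_i - 1)\,\mathbf{1}\{x_i'\beta \ge 0\},
$$

which is piecewise constant in $\beta$ because of the indicator, so its gradient is zero almost everywhere. Replacing the indicator with a ramp built from two ReLUs,

$$
r_{\varepsilon}(t) \;=\; \frac{\operatorname{ReLU}(t) - \operatorname{ReLU}(t - \varepsilon)}{\varepsilon} \;\in\; [0,1],
$$

gives a criterion that coincides with the classical one whenever $x_i'\beta \notin [0, \varepsilon)$, is Lipschitz in $\beta$, and therefore admits standard gradient-based optimization.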

📝 Abstract
We propose a new formulation of the maximum score estimator that uses compositions of rectified linear unit (ReLU) functions, instead of indicator functions as in Manski (1975, 1985), to encode the sign alignment restrictions. Since the ReLU function is Lipschitz, our new ReLU-based maximum score criterion function is substantially easier to optimize using standard gradient-based optimization packages. We also show that our ReLU-based maximum score (RMS) estimator can be generalized to an umbrella framework defined by multi-index single-crossing (MISC) conditions, while the original maximum score estimator cannot be applied. We establish the $n^{-s/(2s+1)}$ convergence rate and asymptotic normality for the RMS estimator under order-$s$ Hölder smoothness. In addition, we propose an alternative estimator using a further reformulation of RMS as a special layer in a deep neural network (DNN) architecture, which allows the estimation procedure to be implemented via state-of-the-art software and hardware for DNN.
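As a concrete illustration of why a Lipschitz criterion helps, below is a minimal sketch of gradient-based optimization of a ReLU-smoothed score in a binary-choice model, assuming PyTorch and the ramp surrogate described above; the names relu_ramp and rms_criterion and the bandwidth eps are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: a ReLU-smoothed maximum score criterion for a
# binary-choice model, optimized with a standard gradient-based package.
# The ramp surrogate and all names are hypothetical; the paper's exact RMS
# construction may differ.
import torch

def relu_ramp(t, eps=0.1):
    # Piecewise-linear ramp built from two ReLUs: 0 for t <= 0,
    # t/eps on (0, eps), and 1 for t >= eps. Lipschitz in t.
    return (torch.relu(t) - torch.relu(t - eps)) / eps

def rms_criterion(beta, x, y, eps=0.1):
    # Sample analogue of E[(2y - 1) * 1{x'beta >= 0}] with the indicator
    # replaced by the ReLU ramp, so the objective is differentiable a.e.
    index = x @ beta
    return ((2.0 * y - 1.0) * relu_ramp(index, eps)).mean()

# Toy data: y = 1{x'beta0 + u >= 0} with small noise u.
torch.manual_seed(0)
n, d = 2000, 3
beta0 = torch.tensor([1.0, -0.5, 2.0])
x = torch.randn(n, d)
y = ((x @ beta0 + 0.3 * torch.randn(n)) >= 0).float()

# Maximum score identifies beta only up to scale, so the sketch
# renormalizes the coefficient to unit length after every step.
beta = torch.randn(d, requires_grad=True)
opt = torch.optim.Adam([beta], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    loss = -rms_criterion(beta, x, y)   # maximize the score
    loss.backward()
    opt.step()
    with torch.no_grad():
        beta /= beta.norm()

# Compare the estimated direction with the true (normalized) direction.
print(beta.detach(), beta0 / beta0.norm())
```

The bandwidth eps controls the usual trade-off between gradient informativeness and smoothing bias; the sketch makes no attempt to replicate the paper's treatment of that trade-off.
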
Problem

Research questions and friction points this paper is trying to address.

Indicator functions make the classical maximum score criterion discontinuous and piecewise constant, hindering gradient-based optimization
The original maximum score estimator cannot be applied under multi-index single-crossing (MISC) structures
Convergence rates and asymptotic normality need to be established for a differentiable replacement of the classical estimator
Innovation

Methods, ideas, or system contributions that make the work stand out.

ReLU functions replace indicator functions in estimation
Gradient-based optimization enabled by Lipschitz criterion function
Deep neural network reformulation, implemented as a dedicated layer, for end-to-end estimation (see the sketch after this list)
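A hedged sketch of how the RMS criterion might be packaged as a network layer so that estimation runs end-to-end on standard DNN tooling; the class RMSLayer, its interface, and the training loop are illustrative PyTorch assumptions, not the paper's implementation.

```python
# Hypothetical sketch: wrapping the ReLU-based score as a network layer so the
# whole estimation runs on standard DNN software and hardware.
# Class and argument names are illustrative, not from the paper.
import torch
import torch.nn as nn

class RMSLayer(nn.Module):
    """Illustrative single-index ReLU-based maximum score layer."""
    def __init__(self, dim, eps=0.1):
        super().__init__()
        self.beta = nn.Parameter(torch.randn(dim))
        self.eps = eps

    def forward(self, x):
        # ReLU ramp applied to the linear index x @ beta.
        t = x @ self.beta
        return (torch.relu(t) - torch.relu(t - self.eps)) / self.eps

def neg_score(p, y):
    # Negative sample score; minimizing it maximizes the ReLU-based criterion.
    return -((2.0 * y - 1.0) * p).mean()

# Ordinary training loop: the estimator plugs into standard DNN tooling.
layer = RMSLayer(dim=3, eps=0.25)
opt = torch.optim.Adam(layer.parameters(), lr=0.05)
x = torch.randn(512, 3)
y = ((x @ torch.tensor([1.0, -0.5, 2.0])) >= 0).float()
for _ in range(300):
    opt.zero_grad()
    loss = neg_score(layer(x), y)
    loss.backward()
    opt.step()
print(layer.beta.detach() / layer.beta.detach().norm())
```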
🔎 Similar Papers
No similar papers found.
Xiaohong Chen
Department of Economics and Cowles Foundation for Research in Economics, Yale University, 28 Hillhouse Ave, New Haven, CT 06511, USA
Wayne Yuan Gao
Department of Economics, University of Pennsylvania
Econometrics · Microeconomic Theory · Networks
Likang Wen
Department of Applied Mathematics and Statistics, Johns Hopkins University, 3400 N Charles St, Baltimore, MD 21218, USA