🤖 AI Summary
Deep neural networks (DNNs) often suffer from degraded generalization due to overfitting, especially when training data is noisy. To address this, we propose a domain-knowledge-driven monotonicity regularization method: leveraging a linear reference model to explicitly encode prior monotonic relationships, we define monotonicity violations as deviations from this reference and formulate a differentiable penalty term integrated into the loss function. Our approach requires no architectural modifications and is compatible with arbitrary DNN architectures. Experiments on both synthetic and real-world datasets demonstrate that moderate monotonicity constraints effectively mitigate overfitting, enhancing model robustness and interpretability without sacrificing predictive accuracy. The core contribution is a principled, end-to-end trainable monotonicity regularization framework grounded in a linear reference model, enabling effective synergy between domain knowledge and deep learning.
📝 Abstract
While deep learning models excel at predictive tasks, they often overfit due to their complex structure and large number of parameters, causing them to memorize training data, including noise, rather than learn patterns that generalize to new data. To tackle this challenge, this paper proposes a new regularization method, Enforcing Domain-Informed Monotonicity in Deep Neural Networks (DIM), which maintains domain-informed monotonic relationships in complex deep learning models to further improve predictions. Specifically, our method enforces monotonicity by penalizing violations relative to a linear baseline, effectively encouraging the model to follow expected trends while preserving its predictive power. We formalize this approach through a comprehensive mathematical framework that establishes a linear reference, measures deviations from monotonic behavior, and integrates these measurements into the training objective. We test and validate the proposed methodology using a real-world ridesourcing dataset from Chicago and a synthetic dataset. Experiments across various neural network architectures show that even modest monotonicity constraints consistently enhance model performance. In sum, DIM improves the predictive performance of deep neural networks by applying domain-informed monotonicity constraints to regularize model behavior and mitigate overfitting.
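The abstract does not spell out the exact form of the penalty, but the idea of measuring violations relative to a linear reference can be sketched as follows. This is a minimal, hypothetical NumPy illustration, not the paper's actual formulation: it estimates the model's local slope along a chosen feature by finite differences and penalizes movement against the direction implied by the linear reference's slope (`model_fn`, `ref_slope`, and `eps` are assumed names and parameters).

```python
import numpy as np

def monotonicity_penalty(model_fn, X, feature_idx, ref_slope, eps=1e-2):
    """Hypothetical sketch of a DIM-style penalty.

    model_fn    : callable mapping an (n, d) array to (n,) predictions (the DNN).
    feature_idx : index of the feature with a known monotonic relationship.
    ref_slope   : slope of the linear reference model for that feature; its
                  sign encodes the expected monotone direction.
    eps         : finite-difference step used to probe the model's local slope.
    """
    X_pert = X.copy()
    X_pert[:, feature_idx] += eps
    # Finite-difference estimate of the model's partial derivative
    # with respect to the chosen feature.
    slope_est = (model_fn(X_pert) - model_fn(X)) / eps
    # A violation occurs where the model's slope points against the
    # reference direction; conforming slopes contribute zero penalty.
    violations = np.maximum(0.0, -np.sign(ref_slope) * slope_est)
    return float(np.mean(violations))
```

In training, a term like `loss = prediction_loss + lam * monotonicity_penalty(...)` with a modest weight `lam` would correspond to the "modest monotonicity constraints" the abstract reports; the actual paper defines the penalty within its own mathematical framework.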