🤖 AI Summary
This paper addresses the challenge of rigorously assessing the robustness and fairness of neural networks deployed in closed-loop systems, where repeated execution amplifies sensitivity to input perturbations. We propose a quantitative analytical framework grounded in the *modulus of continuity*, enabling a localized characterization of continuity. Our key theoretical contribution establishes a novel connection between generalized derivatives and the modulus of continuity, which informs a non-uniform random sampling strategy that overcomes the accuracy limitations of uniform sampling in high-curvature regions. Integrating probability theory, function space analysis, and generalized differential calculus, the method enables efficient, adaptive estimation of local Lipschitz-like behavior. Experiments demonstrate substantial improvements in the tightness of robustness bounds and in sensitivity to fairness violations, particularly under the distributional shifts common in closed-loop operation. The framework provides a mathematically verifiable tool for certifying trustworthiness in safety-critical AI deployments.
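For concreteness, the sketch below records one standard formalization of the central object and its link to generalized derivatives; the Euclidean norms, the use of Clarke's generalized Jacobian, and the exact form of the bound are assumptions, since the paper's precise statements are not reproduced here.

```latex
% Local modulus of continuity of f at x_0 (Euclidean norms assumed):
\omega_f(x_0, \delta) \;=\; \sup_{\|x - x_0\| \le \delta} \big\| f(x) - f(x_0) \big\|

% For locally Lipschitz f, the mean value theorem for Clarke's generalized
% Jacobian \partial f bounds the modulus in terms of generalized derivatives:
\omega_f(x_0, \delta) \;\le\; \delta \cdot
  \sup_{\|x - x_0\| \le \delta} \; \sup_{J \in \partial f(x)} \|J\|_{\mathrm{op}}
```

Read this way, regions where the generalized Jacobian is large are exactly where the modulus can grow fastest, which motivates concentrating samples there rather than spreading them uniformly.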
📝 Abstract
The modulus of local continuity is used to evaluate the robustness of neural networks and the fairness of their repeated use in closed-loop models. Here, we revisit a connection between generalized derivatives and moduli of local continuity, and present a non-uniform stochastic sampling approximation of moduli of local continuity. Both results bear directly on the robustness of neural networks and the fairness of their repeated use.
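Since the abstract does not spell out the sampling scheme, the following is a minimal Python sketch of one plausible non-uniform Monte Carlo estimator: a pilot pass uses directional finite differences as a cheap surrogate for generalized derivatives, and a refinement pass concentrates samples along high-slope directions. The toy network, the function names, and the specific biasing distributions are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Toy stand-in for a trained network: a fixed 1-hidden-layer ReLU map R^8 -> R^4.
_rng = np.random.default_rng(0)
W1 = _rng.standard_normal((16, 8))
W2 = _rng.standard_normal((4, 16))

def f(x: np.ndarray) -> np.ndarray:
    return W2 @ np.maximum(W1 @ x, 0.0)

def estimate_modulus(f, x0, delta, n_pilot=256, n_refine=256, seed=1):
    """Two-stage Monte Carlo lower bound on the local modulus of continuity
    omega_f(x0, delta) = sup_{||x - x0|| <= delta} ||f(x) - f(x0)||."""
    rng = np.random.default_rng(seed)
    d = x0.shape[0]
    f0 = f(x0)

    # Stage 1 (pilot): uniform directions on the sphere; small-step finite
    # differences serve as a surrogate for generalized directional derivatives.
    dirs = rng.standard_normal((n_pilot, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    eps = 1e-3 * delta
    slopes = np.array([np.linalg.norm(f(x0 + eps * u) - f0) / eps for u in dirs])

    # Stage 2 (refine): resample directions with probability proportional to the
    # slope surrogate, so high-slope regions receive more samples; radii are
    # biased toward the boundary of the delta-ball (a Beta(2,1) law on [0, delta]).
    probs = (slopes + 1e-12) / (slopes + 1e-12).sum()
    idx = rng.choice(n_pilot, size=n_refine, p=probs)
    radii = delta * np.sqrt(rng.random(n_refine))
    devs = [np.linalg.norm(f(x0 + r * dirs[i]) - f0) for r, i in zip(radii, idx)]

    # Full-radius probes along all pilot directions give a cheap baseline.
    full = [np.linalg.norm(f(x0 + delta * u) - f0) for u in dirs]
    return max(max(devs), max(full))  # lower-bounds the true supremum

if __name__ == "__main__":
    x0 = np.ones(8)
    print("estimated omega_f(x0, 0.1):", estimate_modulus(f, x0, 0.1))
```

Every sampled deviation is a valid lower bound on the supremum, so the estimate only tightens as samples are added; the derivative-weighted resampling simply spends the evaluation budget where the pilot pass suggests the bound is loosest.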