🤖 AI Summary
This work investigates a class of neural networks whose activation functions satisfy Riccati-type ordinary differential equations, yielding outputs that are Pfaffian functions over analytic domains. By combining the theory of Pfaffian functions, differential-equation constraints, and methods from algebraic topology, the study establishes, for the first time, an architecture-dependent upper bound on topological complexity. This bound uniformly controls the total Betti numbers of both superlevel sets and Lie bracket rank-deficiency sets across all possible weight configurations. The result shows that the bound on the topological complexity of the network's output is determined entirely by its architecture, independent of the specific parameter values, thereby offering a new theoretical framework for understanding the geometric and topological properties of neural networks.
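As a concrete illustration (the summary does not spell out examples, so the following are standard instances rather than the paper's own): the hyperbolic tangent and the logistic sigmoid each satisfy a first-order ODE whose right-hand side is a quadratic polynomial in the activation itself, which is the prototypical Riccati-type condition.

```latex
% Illustrative only: two standard activations satisfying a Riccati-type
% ODE (first-order, with quadratic polynomial right-hand side); the
% paper's exact assumption may be stated more generally.
\[
  \sigma(x) = \tanh(x)
  \;\Longrightarrow\;
  \sigma'(x) = 1 - \sigma(x)^2,
\]
\[
  \sigma(x) = \frac{1}{1 + e^{-x}}
  \;\Longrightarrow\;
  \sigma'(x) = \sigma(x)\bigl(1 - \sigma(x)\bigr).
\]
% In each case \sigma' is a degree-2 polynomial in \sigma itself, which
% is what makes the activation, and hence the network output, Pfaffian.
```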
📝 Abstract
We show that neural networks whose activations satisfy a Riccati-type ordinary differential equation condition, an assumption arising in recent universal approximation results in the uniform topology, produce Pfaffian outputs on analytic domains whose format is controlled only by the architecture. Consequently, superlevel sets, as well as Lie bracket rank-drop loci of neural-network-parameterized vector fields, admit architecture-only bounds on topological complexity, in particular on total Betti numbers, uniformly over all weights.
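For readers unfamiliar with the terminology, here is a minimal sketch of the standard Khovanskii-style definition of Pfaffian functions and their format; the paper's precise conventions (domain, format parameters) may differ.

```latex
% A Pfaffian chain of length q and degree \alpha on U \subseteq \mathbb{R}^n
% is a sequence of analytic functions f_1, \dots, f_q satisfying
\[
  \frac{\partial f_i}{\partial x_j}(x)
  = P_{ij}\bigl(x, f_1(x), \dots, f_i(x)\bigr),
  \qquad 1 \le i \le q,\; 1 \le j \le n,
\]
% with each P_{ij} a polynomial of degree at most \alpha. A Pfaffian
% function of format (q, \alpha, \beta) is then
\[
  f(x) = Q\bigl(x, f_1(x), \dots, f_q(x)\bigr),
  \qquad \deg Q \le \beta.
\]
% "Format controlled only by the architecture" means that (q, \alpha, \beta)
% for the network output depends on depth, width, and activation alone,
% so Betti-number bounds for Pfaffian sets apply uniformly over all weights.
```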