🤖 AI Summary
This paper investigates the descriptive complexity of neural networks with arbitrary topologies and piecewise-polynomial activation functions, in a setting where floating-point numbers are used to simulate reals, focusing on a logical characterization of their behavior over an unlimited number of computation rounds.
Method: We establish size-polynomial translations in both directions between such neural networks and a recursive rule-based logic for Boolean networks, and further relate this logic to the diamond-free fragment of modal substitution calculus and to a class of recursive Boolean circuits, all under floating-point semantics.
Contributions/Results: The translations cover general topologies and diverse piecewise-polynomial activations, including linear ones. Network size and the size of the corresponding Boolean rule formula are polynomially related, and in the direction from Boolean rules to neural networks the blow-up is only linear. Each translation incurs a time delay, i.e., the number of rounds the translated object needs to simulate one round of the original: polylogarithmic in network size when going from neural networks to Boolean rules, and linear in formula size in the converse direction. Restricting the logic yields an analogous characterization for classical feedforward networks, and the translations provide a method for replacing a network's activation functions with different ones, including linear activations. The framework thus bridges continuous neural dynamics with discrete logical reasoning under realistic floating-point computation.
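The direction from Boolean rules to neural networks, where the blow-up is only linear, can be illustrated with a toy sketch. This is not the paper's construction: it simply encodes each Boolean rule as a single threshold (Heaviside) neuron, so a system of three rules becomes a three-neuron recurrent network, and both systems produce identical trajectories. The rule system and the weights are illustrative assumptions.

```python
# Toy sketch (not the paper's construction): each Boolean update rule is
# encoded as one threshold neuron, so the size blow-up is linear.

def heaviside(x):
    """Threshold activation: 1 if x >= 0, else 0."""
    return 1 if x >= 0 else 0

def boolean_step(s):
    """One round of an example Boolean rule system over state (X, Y, Z):
    X := Y OR Z;  Y := X AND Z;  Z := NOT X."""
    x, y, z = s
    return (x | z | y & 0 | (y | z), x & z, 1 - x)[-3:]

def boolean_step(s):  # cleaner version used below
    x, y, z = s
    return (y | z, x & z, 1 - x)

def neural_step(s):
    """The same round computed by three threshold neurons."""
    x, y, z = s
    return (heaviside(y + z - 1),   # X := Y OR Z
            heaviside(x + z - 2),   # Y := X AND Z
            heaviside(-x))          # Z := NOT X

state_b = state_n = (1, 0, 1)
for _ in range(8):
    state_b = boolean_step(state_b)
    state_n = neural_step(state_n)
    assert state_b == state_n   # trajectories agree round by round
```

Running the loop confirms that the threshold network reproduces the Boolean dynamics exactly, one neuron per rule.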
📝 Abstract
We investigate the descriptive complexity of a class of neural networks with unrestricted topologies and piecewise polynomial activation functions. We consider the general scenario where the networks run for an unlimited number of rounds and floating-point numbers are used to simulate reals. We characterize these neural networks with a recursive rule-based logic for Boolean networks. In particular, we show that the sizes of the neural networks and the corresponding Boolean rule formulae are polynomially related. In fact, in the direction from Boolean rules to neural networks, the blow-up is only linear. Our translations result in a time delay, which is the number of rounds that it takes for an object's translation to simulate a single round of the object. In the translation from neural networks to Boolean rules, the time delay of the resulting formula is polylogarithmic in the neural network size. In the converse translation, the time delay of the neural network is linear in the formula size. As a corollary, by restricting our logic, we obtain a similar characterization for classical feedforward neural networks. We also obtain translations between the rule-based logic for Boolean networks, the diamond-free fragment of modal substitution calculus and a class of recursive Boolean circuits where the number of input and output gates matches. Ultimately, our translations offer a method of translating a given neural network into an equivalent neural network with different activation functions, including linear activation functions!
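The notion of time delay can be made concrete with a toy sketch, again not the paper's construction. A single neuron computing a 4-input conjunction in one round can be simulated by a binary tree of 2-input conjunction neurons that needs two rounds, i.e., it has time delay 2 = log2(4), mirroring how the translation to Boolean rules incurs a polylogarithmic delay. The threshold encodings below are illustrative assumptions.

```python
# Toy sketch of "time delay": one round of the original object is simulated
# by several rounds of its translation.

def heaviside(x):
    """Threshold activation: 1 if x >= 0, else 0."""
    return 1 if x >= 0 else 0

def and4_direct(a, b, c, d):
    """Original object: one neuron, one round, computing AND of 4 inputs."""
    return heaviside(a + b + c + d - 4)

def and4_tree(a, b, c, d):
    """Translation: a depth-2 binary tree of 2-input AND neurons.
    Two rounds are needed, so the time delay is log2(4) = 2."""
    ab = heaviside(a + b - 2)       # round 1
    cd = heaviside(c + d - 2)       # round 1
    return heaviside(ab + cd - 2)   # round 2

from itertools import product
for bits in product((0, 1), repeat=4):
    assert and4_direct(*bits) == and4_tree(*bits)
```

After the delay, the translated network computes exactly the same function on every input; the exhaustive check over all 16 input vectors confirms this.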