🤖 AI Summary
This work investigates expressive, verifiable, and interpretable querying of feedforward neural networks of unbounded depth. To this end, it introduces a weighted fixpoint logic with a Datalog-like syntax and a "loose" fixpoint mechanism, making learned representations more accessible to verification and interpretation. The "scalar" restriction of this logic has polynomial-time data complexity and captures precisely the PTIME model-agnostic queries over reduced networks with polynomially bounded weights. In contrast, the paper shows that even very simple model-agnostic queries are already NP-complete, revealing the intrinsic computational hardness of the problem.
📄 Abstract
Expressive querying of machine learning models, viewed as a form of intensional data, enables their verification and interpretation using declarative languages, thereby making learned representations of data more accessible. Motivated by the querying of feedforward neural networks, we investigate logics for weighted structures. In the absence of a bound on neural network depth, such logics must incorporate recursion; to this end, we revisit the functional fixpoint mechanism proposed by Grädel and Gurevich. We adopt it in a Datalog-like syntax; we extend normal forms for fixpoint logics to weighted structures; and we show an equivalent "loose" fixpoint mechanism that allows values of inductively defined weight functions to be overwritten. We propose a "scalar" restriction of functional fixpoint logic, of polynomial-time data complexity, and show it can express all PTIME model-agnostic queries over reduced networks with polynomially bounded weights. In contrast, we show that very simple model-agnostic queries are already NP-complete. Finally, we consider transformations of weighted structures by iterated transductions.
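To make the fixpoint idea concrete, the following is a minimal illustrative sketch, not the paper's formalism: a feedforward ReLU network is represented as a weighted structure (a weighted edge relation plus fixed input values), and the values of the non-input nodes are obtained by iterating an inductive weight-function update until it stabilizes. Because the network is acyclic, the iteration reaches a fixpoint. All names and the toy network here are hypothetical.

```python
# Hypothetical illustration: evaluating a feedforward ReLU network by
# iterating a weight-function update to a fixpoint, in the spirit of a
# functional fixpoint over a weighted structure.

# Weighted edge relation: (source, target) -> weight.
edges = {("x1", "h"): 2.0, ("x2", "h"): -1.0, ("h", "y"): 3.0}
# Fixed values of the input nodes.
inputs = {"x1": 1.0, "x2": 1.0}

def relu(v):
    return max(0.0, v)

def step(val):
    """One application of the inductive update: recompute every
    non-input node from the current values of its predecessors."""
    new = dict(val)
    targets = {t for (_, t) in edges}
    for t in targets:
        s = sum(w * val.get(u, 0.0)
                for (u, v), w in edges.items() if v == t)
        new[t] = relu(s)
    return new

def fixpoint(val):
    """Iterate the update until the value assignment stops changing."""
    while True:
        nxt = step(val)
        if nxt == val:
            return val
        val = nxt

result = fixpoint(dict(inputs))
print(result["y"])  # h = relu(2*1 - 1*1) = 1.0, then y = relu(3*1) = 3.0
```

A "loose" variant, as described in the abstract, would additionally permit a node's previously computed value to be overwritten during the iteration rather than only defined once; the sketch above uses the plain inductive reading.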