From Neural Networks to Logical Theories: The Correspondence between Fibring Modal Logics and Fibring Neural Networks

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work establishes, for the first time, a formal correspondence between neural network fibrations and modal logic fibrations. Addressing the lack of a unified computational-logical foundation for neural network logical interpretability, we propose a compatible fibration-based modeling framework that couples the pre-activation mechanisms of neural architectures—including GNNs, GATs, and Transformer encoders—with the semantic models of fibred modal logic. Within this neuro-symbolic analytical framework, we rigorously characterize and comparatively analyze the heterogeneous logical expressivity of these three model classes, exposing their intrinsic differences in logical representation. Our main contributions are: (1) the first rigorous proof of equivalence between fibred neural networks and fibred modal logics; (2) an extensible fibred semantic modeling methodology; and (3) novel theoretical tools and an analytical paradigm for advancing logical interpretability of neural networks.

📝 Abstract
Fibring of modal logics is a well-established formalism for combining countable families of modal logics into a single fibred language with common semantics, characterized by fibred models. Inspired by this formalism, fibring of neural networks was introduced as a neurosymbolic framework for combining learning and reasoning in neural networks. Fibring of neural networks uses the (pre-)activations of a trained network to evaluate a fibring function computing the weights of another network whose outputs are injected back into the original network. However, the exact correspondence between fibring of neural networks and fibring of modal logics was never formally established. In this paper, we close this gap by formalizing the idea of fibred models *compatible* with fibred neural networks. Using this correspondence, we then derive non-uniform logical expressiveness results for Graph Neural Networks (GNNs), Graph Attention Networks (GATs) and Transformer encoders. Longer-term, the goal of this paper is to open the way for the use of fibring as a formalism for interpreting the logical theories learnt by neural networks with the tools of computational logic.
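The abstract's description of network fibring — A's pre-activations feed a fibring function that computes the weights of a second network B, whose output is injected back into A — can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's construction; the outer-product fibring function, the layer shapes, and all variable names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Network A: one dense layer; we keep its pre-activations for fibring.
W_A = rng.normal(size=(3, 4))   # weights of A (illustrative shape)
x = rng.normal(size=4)          # shared input
pre_act = W_A @ x               # A's pre-activations, shape (3,)

def fibring_fn(pre_activations, in_dim=4):
    # Hypothetical fibring function: maps A's pre-activations to a
    # weight matrix for B via a simple outer product (an assumption;
    # the paper leaves the fibring function's form abstract).
    return np.outer(pre_activations, np.ones(in_dim))

W_B = fibring_fn(pre_act)           # B's weights, computed at run time from A
y_B = np.maximum(0.0, W_B @ x)      # output of the fibred network B

# Inject B's output back into A before A's own activation.
output_A = np.maximum(0.0, pre_act + y_B)
```

The key point the sketch captures is that B has no fixed trained weights of its own: its weight matrix is a function of A's current pre-activations, so B's behavior (and hence the feedback into A) varies with A's input.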
Problem

Research questions and friction points this paper is trying to address.

Establishing a formal correspondence between fibring of neural networks and fibring of modal logics
Deriving logical expressiveness results for GNNs, GATs, and Transformer encoders
Developing fibring as a formalism for interpreting the logical theories learnt by neural networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fibred models compatible with fibred neural networks
Using pre-activations of one network to compute the weights of another via a fibring function
Non-uniform logical expressiveness characterizations for three neural architectures