Prime Convolutional Model: Breaking the Ground for Theoretical Explainability

📅 2025-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the theoretical interpretability of neural network behavior. Focusing on the Prime Convolutional model (p-Conv) for recognizing modulo-$m$ congruence classes over natural number sequences, it establishes the first XAI mathematical modeling paradigm grounded in scientific methodology. The approach introduces a custom convolutional architecture with a sliding-window context of length $B$, integrates modular arithmetic task design, and employs empirically guided theoretical induction to derive a provably correct mathematical model of neural behavior. Key contributions include: (i) rigorous derivation of necessary and sufficient number-theoretic conditions—expressed in terms of $m$ and $B$—for model convergence; and (ii) analytical criteria characterizing success, failure, and error patterns. Experimental validation confirms >99.9% accuracy under the derived conditions, precisely delineating the performance boundaries of p-Conv.

📝 Abstract
In this paper, we propose a new theoretical approach to Explainable AI. Following the Scientific Method, this approach consists in formulating, on the basis of empirical evidence, a mathematical model to explain and predict the behavior of Neural Networks. We apply the method to a case study created in a controlled environment, which we call the Prime Convolutional Model (p-Conv for short). p-Conv operates on a dataset consisting of the first one million natural numbers and is trained to identify the congruence classes modulo a given integer $m$. Its architecture uses a convolutional-type neural network that contextually processes, for each input, a sequence of $B$ consecutive numbers. We take an empirical approach and exploit p-Conv to identify the congruence classes of numbers in a validation set for different values of $m$ and $B$. The results show that the different behaviors of p-Conv (i.e., whether it can perform the task or not) can be modeled mathematically in terms of $m$ and $B$. The inferred mathematical model reveals interesting patterns that explain when and why p-Conv succeeds in performing the task and, if not, which error pattern it follows.
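The task setup described in the abstract can be sketched in a few lines: label each natural number $n$ with its congruence class $n \bmod m$, and give the model a context window of $B$ consecutive numbers. This is a minimal illustrative sketch, not the paper's actual pipeline; the function and parameter names (`make_dataset`, `n_max`) are assumptions.

```python
# Hypothetical sketch of the p-Conv data setup: windows of B consecutive
# numbers, each labeled with the congruence class of its last element
# modulo m. The paper's real preprocessing and architecture may differ.

def make_dataset(n_max, m, B):
    """Return (window, label) pairs: each window holds B consecutive
    numbers ending at n, labeled with the class n mod m."""
    samples = []
    for n in range(B - 1, n_max):
        window = list(range(n - B + 1, n + 1))  # B consecutive numbers
        samples.append((window, n % m))         # target: n mod m
    return samples

data = make_dataset(n_max=10, m=3, B=4)
print(data[0])  # ([0, 1, 2, 3], 0) since 3 mod 3 == 0
```

In the paper, `n_max` would be one million, and the interesting question is for which combinations of $m$ and $B$ a convolutional network trained on such pairs converges.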
Problem

Research questions and friction points this paper is trying to address.

Develops a mathematical model for Explainable AI.
Explains Neural Network behavior using empirical evidence.
Identifies patterns in task success and error.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prime Convolutional Model for Explainable AI
Mathematical modeling of Neural Network behaviors
Empirical validation with congruence classes
Francesco Panelli
Independent Researcher, Firenze, Italy
Doaa Almhaithawi
Politecnico di Torino, Department of Control and Computer Engineering, Torino, Italy
Tania Cerquitelli
Full Professor, Dept. of Control and Computer Engineering, Politecnico di Torino
Automated Data Science, Explainable AI, Machine learning, Data management, Big data analytics
Alessandro Bellini
Mathema srl, Firenze, Italy