A Methodology for Transparent Logic-Based Classification Using a Multi-Task Convolutional Tsetlin Machine

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of balancing interpretability and performance in large-scale, multi-channel (RGB) image classification. We propose a multi-task Convolutional Tsetlin Machine (TM) architecture, grounded in finite-state automata and propositional logic, that jointly learns local predictive explanations (clause-level) and global class representations (class-level logical patterns); the convolutional clauses can be visualized as images to enhance transparency. Our approach is the first to combine high interpretability with competitive accuracy on complex real-world data (CelebA). It achieves 98.5% accuracy on MNIST and an F1-score of 86.56% on CelebA (versus 88.07% for ResNet50) while preserving full logical transparency. This work establishes a scalable, interpretable, and practically viable paradigm for logic-driven transparent AI in image classification.
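The clause-level mechanism the summary refers to can be sketched as follows: in a Tsetlin Machine, each clause is a conjunction of literals (booleanized input bits or their negations), and clauses vote for or against a class, with the summed votes deciding the prediction. The snippet below is a minimal illustrative sketch of this inference scheme, not the paper's implementation; the clause sets and input are hypothetical toy data.

```python
# Illustrative sketch of Tsetlin Machine inference (toy example, not the
# paper's code). A clause is a conjunction of literals: input bits or their
# negations. Clauses vote for (+1) or against (-1) a class; summed votes decide.

def eval_clause(x, include_pos, include_neg):
    """A clause fires iff every included positive literal is 1
    and every included negated literal is 0."""
    return all(x[i] for i in include_pos) and all(not x[i] for i in include_neg)

def class_score(x, clauses):
    """Sum clause votes; each clause is (polarity, include_pos, include_neg)."""
    return sum(p for p, ip, ineg in clauses if eval_clause(x, ip, ineg))

def predict(x, clauses_per_class):
    """Pick the class with the highest vote sum."""
    return max(clauses_per_class, key=lambda c: class_score(x, clauses_per_class[c]))

# Hypothetical clauses: class "A" wants bit0 AND NOT bit2; class "B" wants bit2.
clauses = {
    "A": [(+1, [0], [2]), (-1, [2], [])],
    "B": [(+1, [2], []), (-1, [0], [2])],
}
x = [1, 0, 0]  # bit0 set, bit2 clear
print(predict(x, clauses))  # prints "A": class A scores +1, class B scores -1
```

Because each clause is a readable propositional expression, the model's decision can be traced literal by literal, which is the source of the interpretability the paper emphasizes.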

📝 Abstract
The Tsetlin Machine (TM) is a novel machine learning paradigm that employs finite-state automata for learning and utilizes propositional logic to represent patterns. Due to this simple approach, TMs are inherently more interpretable than Neural Network-based learning algorithms. The Convolutional TM has shown competitive performance on datasets such as MNIST, K-MNIST, F-MNIST, and CIFAR-2. In this paper, we explore the applicability of the TM architecture to large-scale multi-channel (RGB) image classification. We propose a methodology to generate both local interpretations and global class representations. The local interpretations can be used to explain the model's predictions, while the global class representations aggregate important patterns for each class. These interpretations summarize the knowledge captured by the convolutional clauses, which can be visualized as images. We evaluate our methods on the MNIST and CelebA datasets, using models that achieve 98.5% accuracy on MNIST and an 86.56% F1-score on CelebA (compared to 88.07% for ResNet50). We show that the TM performs competitively with this deep learning model while maintaining its interpretability, even in large-scale, complex training environments. This contributes to a better understanding of TM clauses and provides insights into how these models can be applied to more complex and diverse datasets.
Problem

Research questions and friction points this paper is trying to address.

- Developing transparent logic-based classification using a multi-task convolutional Tsetlin Machine
- Generating local interpretations and global representations for model predictions
- Maintaining interpretability while achieving competitive performance on complex datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Multi-task convolutional Tsetlin Machine for classification
- Propositional logic representation for interpretable patterns
- Local and global visual explanations from clauses
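The "local explanations from clauses" idea above can be illustrated simply: for a given input, a local explanation is the set of positive-polarity clauses that fired, each rendered as a readable propositional expression. The sketch below is a hedged illustration under an assumed clause encoding (polarity plus literal strings like "x3" / "NOT x7"); it is not the paper's actual representation.

```python
# Hedged sketch of clause-level (local) explanations. Assumes clauses are
# stored as (polarity, literals), with literals like "x0" or "NOT x2".

def fired(x, literals):
    """A clause fires when its conjunction of literals is satisfied."""
    for lit in literals:
        neg = lit.startswith("NOT ")
        idx = int(lit.split("x")[1])
        if x[idx] == neg:  # positive literal with bit 0, or negated with bit 1
            return False
    return True

def local_explanation(x, clauses):
    """Return the propositional clauses that voted FOR this prediction."""
    return [" AND ".join(lits) for pol, lits in clauses if pol > 0 and fired(x, lits)]

# Hypothetical toy clause set for one class.
clauses = [(+1, ["x0", "NOT x2"]), (+1, ["x1"]), (-1, ["x2"])]
print(local_explanation([1, 1, 0], clauses))  # prints ['x0 AND NOT x2', 'x1']
```

A global class representation would then aggregate such fired clauses over many inputs of the same class, which is what the paper visualizes as images.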
👥 Authors

Mayur Kishor Shende
Dept. of ICT, University of Agder, Grimstad, Norway
Ole-Christoffer Granmo
Professor, University of Agder (Machine Learning)
Runar Helin
Dept. of ICT, University of Agder, Grimstad, Norway
Vladimir I. Zadorozhny
School of Computing and Information, University of Pittsburgh, Pittsburgh, USA
Rishad Shafik
Professor of Microelectronic Systems, Newcastle University, UK (Machine Learning Hardware, Energy-Aware Computing, HW/SW Co-design)