The Tsetlin Machine Goes Deep: Logical Learning and Reasoning With Graphs

📅 2025-07-20
🤖 AI Summary
This work addresses the challenge of interpretable deep learning for graph-structured data. We propose GraphTM, the first deep logic learning framework tailored for graphs, extending the Tsetlin Machine to deep graph learning via graph message passing. GraphTM dynamically constructs nested, hierarchical logical clauses, enabling unified modeling of sequential, grid, relational, and multimodal graph inputs. Its core contribution lies in replacing opaque neural computations with human-readable logical rules, thereby drastically reducing clause complexity while enhancing generalization and noise robustness. Experiments demonstrate that GraphTM improves classification accuracy by 3.86 percentage points on CIFAR-10 over a convolutional TM, outperforms reinforcement learning baselines by up to 20.6 percentage points on action coreference tracking, exhibits superior robustness over GCNs in recommendation tasks, and achieves 2.5x faster training than a GCN with comparable accuracy in genomic sequence analysis.

📝 Abstract
Pattern recognition with concise and flat AND-rules makes the Tsetlin Machine (TM) both interpretable and efficient, while the power of Tsetlin automata enables accuracy comparable to deep learning on an increasing number of datasets. We introduce the Graph Tsetlin Machine (GraphTM) for learning interpretable deep clauses from graph-structured input. Moving beyond flat, fixed-length input, the GraphTM becomes more versatile, supporting sequences, grids, relations, and multimodality. Through message passing, the GraphTM builds nested deep clauses to recognize sub-graph patterns with exponentially fewer clauses, increasing both interpretability and data utilization. For image classification, GraphTM preserves interpretability and achieves 3.86%-points higher accuracy on CIFAR-10 than a convolutional TM. For tracking action coreference, faced with increasingly challenging tasks, GraphTM outperforms other reinforcement learning methods by up to 20.6%-points. In recommendation systems, it tolerates increasing noise to a greater extent than a Graph Convolutional Neural Network (GCN), e.g., for noise ratio 0.1, GraphTM obtains accuracy 89.86% compared to GCN's 70.87%. Finally, for viral genome sequence data, GraphTM is competitive with BiLSTM-CNN and GCN accuracy-wise, training 2.5x faster than GCN. The GraphTM's application to these varied fields demonstrates how graph representation learning and deep clauses bring new possibilities for TM learning.
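To make the abstract's "concise and flat AND-rules" concrete, the following is a minimal sketch of how a vanilla TM evaluates conjunctive clauses over binarized input and votes for a class. The function names, literal encoding, and clause data are illustrative assumptions, not the paper's implementation:

```python
# Hedged sketch: flat Tsetlin Machine AND-clauses over a binary feature vector.
# A literal k < n requires x[k] == 1; a literal k >= n requires x[k - n] == 0
# (the negated feature). This encoding is an assumption for illustration.

def eval_clause(clause, x):
    """Return 1 if every literal in the clause is satisfied by x, else 0."""
    n = len(x)
    for lit in clause:
        if lit < n:
            if x[lit] != 1:
                return 0
        else:
            if x[lit - n] != 0:
                return 0
    return 1

def classify(clauses_pos, clauses_neg, x):
    # Class score: votes from positive-polarity clauses minus negative ones.
    score = sum(eval_clause(c, x) for c in clauses_pos) - \
            sum(eval_clause(c, x) for c in clauses_neg)
    return 1 if score >= 0 else 0

x = [1, 0, 1]            # binarized input features
clauses_pos = [{0, 4}]   # x0 AND NOT x1
clauses_neg = [{1}]      # x1
print(classify(clauses_pos, clauses_neg, x))  # -> 1
```

Each clause is directly readable as an AND-rule over named features, which is the interpretability property the GraphTM extends to nested, graph-level clauses.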
Problem

Research questions and friction points this paper is trying to address.

Learning interpretable deep clauses from graph-structured input
Recognizing sub-graph patterns with exponentially fewer clauses
Improving accuracy and interpretability in diverse applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph Tsetlin Machine for deep interpretable clauses
Message passing for recognizing sub-graph patterns
Versatile support for sequences, grids, and multimodality
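The message-passing idea above can be sketched in a toy form: each node's symbol set is augmented with symbols received from its neighbors, so that a clause over the augmented set can match a sub-graph pattern rather than a single node. The symbol tagging and single-round scheme here are illustrative assumptions, not GraphTM's actual encoding:

```python
# Hedged sketch of graph message passing: nodes exchange their current
# symbols with neighbors; received symbols are tagged as messages so a
# rule can distinguish a node's own features from its neighborhood's.

def message_pass(node_symbols, edges, rounds=1):
    """node_symbols: {node: set of symbols}; edges: list of (u, v) pairs."""
    for _ in range(rounds):
        incoming = {n: set() for n in node_symbols}
        for u, v in edges:
            # Send each endpoint's current symbols to the other side.
            incoming[v] |= {("msg", s) for s in node_symbols[u]}
            incoming[u] |= {("msg", s) for s in node_symbols[v]}
        for n in node_symbols:
            node_symbols[n] |= incoming[n]
    return node_symbols

g = {0: {"A"}, 1: {"B"}, 2: {"C"}}
out = message_pass(g, [(0, 1), (1, 2)])
print(out[1])  # node 1 now also holds messages from its neighbors 0 and 2
```

After one round, a clause such as "B AND msg(A)" matches node 1 only when an A-labeled neighbor exists, which hints at why nested clauses can cover sub-graph patterns with far fewer rules than flat ones.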
Authors
Ole-Christoffer Granmo, Professor, University of Agder (Machine Learning)
Youmna Abdelwahab, PhD student, University of Agder (NLP, Deep Learning, Tsetlin Automata)
Per-Arne Andersen, Associate Professor, University of Agder (Reinforcement Learning, Machine Learning, Deep Learning, Cybersecurity, Tsetlin Machine)
Paul F. A. Clarke, University of Agder
Kunal Dumbre, University of Agder
Ylva Grønninsæter, University of Agder
Vojtech Halenka, University of Agder (Machine Learning)
Runar Helin, University of Agder
Lei Jiao, University of Agder
Ahmed Khalid, University of Agder
Rebekka Omslandseter, University of Agder
Rupsa Saha, Department of ICT, University of Agder (Machine Learning, Image Processing, Natural Language Processing)
Mayur Shende, University of Agder
Xuan Zhang, University of Agder, NORCE