🤖 AI Summary
This work addresses the challenge of interpretable deep learning for graph-structured data. We propose GraphTM, the first deep logic learning framework tailored for graphs, extending the Tsetlin Machine to deep graph learning via graph message passing. GraphTM dynamically constructs nested, hierarchical logical clauses, enabling unified modeling of sequential, grid, relational, and multimodal graph inputs. Its core contribution is replacing opaque neural computations with human-readable logical rules, drastically reducing clause complexity while enhancing generalization and noise robustness. Experiments demonstrate that GraphTM improves CIFAR-10 classification accuracy by 3.86 percentage points over a convolutional TM, outperforms reinforcement learning baselines by up to 20.6 percentage points on action coreference tracking, exhibits superior noise robustness over GCNs in recommendation tasks, and trains 2.5× faster than a GCN with comparable accuracy in genomic sequence analysis.
📝 Abstract
Pattern recognition with concise and flat AND-rules makes the Tsetlin Machine (TM) both interpretable and efficient, while the power of Tsetlin automata enables accuracy comparable to deep learning on an increasing number of datasets. We introduce the Graph Tsetlin Machine (GraphTM) for learning interpretable deep clauses from graph-structured input. Moving beyond flat, fixed-length input, the GraphTM becomes more versatile, supporting sequences, grids, relations, and multimodality. Through message passing, the GraphTM builds nested deep clauses to recognize sub-graph patterns with exponentially fewer clauses, increasing both interpretability and data utilization. For image classification, GraphTM preserves interpretability and achieves 3.86 percentage points higher accuracy on CIFAR-10 than a convolutional TM. For tracking action coreference, faced with increasingly challenging tasks, GraphTM outperforms other reinforcement learning methods by up to 20.6 percentage points. In recommendation systems, it tolerates increasing noise to a greater extent than a Graph Convolutional Neural Network (GCN), e.g., at a noise ratio of 0.1, GraphTM obtains 89.86% accuracy compared to the GCN's 70.87%. Finally, for viral genome sequence data, GraphTM is competitive with BiLSTM-CNN and GCN accuracy-wise, training 2.5× faster than the GCN. The GraphTM's application to these varied fields demonstrates how graph representation learning and deep clauses bring new possibilities for TM learning.
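To make the "flat AND-rules" idea concrete, the following is a minimal illustrative sketch (not the paper's implementation) of how a Tsetlin Machine's learned clauses are evaluated at inference time: each clause is a conjunction of possibly negated binary literals, and a class vote is the number of firing positive-polarity clauses minus the firing negative-polarity ones. The clause contents and feature layout here are hypothetical examples.

```python
# Illustrative sketch of flat TM clause evaluation (not the paper's code).
# A clause is a list of (feature_index, polarity) literals:
#   polarity True  -> the literal requires feature == 1
#   polarity False -> the literal is negated, requiring feature == 0

def evaluate_clause(clause, features):
    """Return True iff the AND of all literals holds on the binary input."""
    return all(features[i] == (1 if polarity else 0) for i, polarity in clause)

def classify(pos_clauses, neg_clauses, features):
    """Vote: firing positive clauses minus firing negative clauses."""
    score = sum(evaluate_clause(c, features) for c in pos_clauses)
    score -= sum(evaluate_clause(c, features) for c in neg_clauses)
    return 1 if score >= 0 else 0

# Two hypothetical human-readable rules over binary input x = (x0, x1, x2):
pos = [[(0, True), (1, True)]]   # "x0 AND x1" votes for the class
neg = [[(2, True), (0, False)]]  # "x2 AND NOT x0" votes against it

print(classify(pos, neg, [1, 1, 0]))  # positive rule fires -> 1
print(classify(pos, neg, [0, 0, 1]))  # negative rule fires -> 0
```

Because each clause is a plain conjunction, the decision can be read off directly as a rule list; the GraphTM extends this scheme by nesting clauses over message-passing rounds so that sub-graph patterns are captured without enumerating every combination as a separate flat clause.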