Developing a Foundation of Vector Symbolic Architectures Using Category Theory

📅 2025-01-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vector Symbolic Architectures (VSAs) lack formal mathematical foundations, which hinders their integration with neural networks and limits their utility in explainable AI. Method: This paper applies category theory, proposing that VSAs can be understood as (division) rigs in a category enriched over a monoid in **Met**, the category of Lawvere metric spaces. Contribution/Results: This formalization unifies the algebraic structure of symbolic operations with continuous, gradient-compatible computation, characterizes the cognitive-semantic role of VSAs, and delineates their compositional boundaries. It further suggests that VSAs can be generalized beyond current implementations, laying groundwork for differentiable reasoning, neurosymbolic integration, and explainable AI within a coherent categorical framework.

📝 Abstract
At the risk of overstating the case, connectionist approaches to machine learning, i.e. neural networks, are enjoying a small vogue right now. However, these methods require large volumes of data and produce models that are uninterpretable to humans. An alternative framework that is compatible with neural networks and gradient-based learning, but explicitly models compositionality, is Vector Symbolic Architectures (VSAs). VSAs are a family of algebras on high-dimensional vector representations. They arose in cognitive science from the need to unify neural processing and the kind of symbolic reasoning that humans perform. While machine learning methods have benefited from category theoretical analyses, VSAs have not yet received similar treatment. In this paper, we present a first attempt at applying category theory to VSAs. Specifically, we conduct a brief literature survey demonstrating the lacking intersection of these two topics, provide a list of desiderata for VSAs, and propose that VSAs may be understood as a (division) rig in a category enriched over a monoid in Met (the category of Lawvere metric spaces). This final contribution suggests that VSAs may be generalised beyond current implementations. It is our hope that grounding VSAs in category theory will lead to more rigorous connections with other research, both within and beyond, learning and cognition.
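The abstract describes VSAs as a family of algebras on high-dimensional vectors. As a concrete illustration, here is a minimal sketch of one well-known VSA, Holographic Reduced Representations (HRR), in which binding is circular convolution and bundling is elementwise addition; the dimensionality, role/filler names, and the FFT-based implementation are illustrative choices, not details taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1024  # high dimension makes independent random vectors quasi-orthogonal

def random_vec(d, rng):
    # i.i.d. Gaussian entries scaled so the expected norm is ~1 (HRR convention)
    return rng.normal(0.0, 1.0 / np.sqrt(d), d)

def bind(a, b):
    # HRR binding: circular convolution, computed via the FFT
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def unbind(c, a):
    # approximate inverse of binding: convolve with the involution of a
    inv_a = np.concatenate(([a[0]], a[:0:-1]))
    return bind(c, inv_a)

def bundle(*vs):
    # bundling (superposition): elementwise sum
    return np.sum(vs, axis=0)

def sim(x, y):
    # cosine similarity, used to compare noisy decoded vectors
    return x @ y / (np.linalg.norm(x) * np.linalg.norm(y))

# encode two role:filler pairs into a single vector, then query one role
color, shape = random_vec(d, rng), random_vec(d, rng)
red, circle = random_vec(d, rng), random_vec(d, rng)
obj = bundle(bind(color, red), bind(shape, circle))

# unbinding the color role recovers a vector close to `red`, far from `circle`
guess = unbind(obj, color)
print(sim(guess, red), sim(guess, circle))
```

Unbinding is only approximate: the recovered vector is the target plus crosstalk noise from the other bundled pairs, which is why decoding compares against a codebook by similarity rather than expecting exact recovery.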
Problem

Research questions and friction points this paper is trying to address.

Category Theory
Vector Symbolic Architectures
Neural Network Compatibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Category Theory
Vector Symbolic Architectures
Mathematical Division Algebras
N. Shaw
Cheriton School of Computer Science, University of Waterloo
P. M. Furlong
Centre for Theoretical Neuroscience, Systems Design Engineering, University of Waterloo
Britt Anderson
University of Waterloo
Jeff Orchard
Associate Professor, Cheriton School of Computer Science, University of Waterloo
computational neuroscience, image processing