🤖 AI Summary
In Vector Symbolic Architectures (VSAs), the conventional clean-up operation compares a noisy query vector against every prototype vector in an explicitly stored codebook, incurring computational cost that is quadratic or similar and severely limits scalability. This paper proposes an implicit codebook construction based on Kronecker products of rotation-like matrices, eliminating both explicit codebook storage and exhaustive matching during clean-up. The approach achieves O(N log N) clean-up time and O(N) clean-up space, where N is both the vector dimension and the number of codebook vectors; the codebook itself is represented in only O(log N) space, with individual vectors materializable in O(N) time and space, while asymptotic memory capacity remains comparable to standard approaches. Experimental results demonstrate that the clean-up procedure accelerates retrieval by several orders of magnitude over baseline VSA methods, enabling efficient large-scale vector search and key-value associative memory operations.
📝 Abstract
A computational bottleneck in current Vector-Symbolic Architectures (VSAs) is the ``clean-up'' step, which decodes the noisy vectors retrieved from the architecture. Clean-up typically compares noisy vectors against a ``codebook'' of prototype vectors, incurring computational complexity that is quadratic or similar. We present a new codebook representation that supports efficient clean-up, based on Kronecker products of rotation-like matrices. The resulting clean-up time complexity is linearithmic, i.e. $\mathcal{O}(N \log N)$, where $N$ is the vector dimension and also the number of vectors in the codebook. Clean-up space complexity is $\mathcal{O}(N)$. Furthermore, the codebook is not stored explicitly in computer memory: it can be represented in $\mathcal{O}(\log N)$ space, and individual vectors in the codebook can be materialized in $\mathcal{O}(N)$ time and space. At the same time, asymptotic memory capacity remains comparable to that of standard approaches. Computer experiments confirm these results, demonstrating several orders of magnitude more scalability than baseline VSA techniques.
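To illustrate how such an implicit codebook can achieve these complexities, the sketch below builds an $N \times N$ orthogonal codebook as a Kronecker product of $\log_2 N$ small matrices: only the $2 \times 2$ factors are stored ($\mathcal{O}(\log N)$ space), a single codebook vector is materialized in $\mathcal{O}(N)$ time, and clean-up applies the factored matrix to the query in $\mathcal{O}(N \log N)$ time without ever forming the full codebook. The choice of $2 \times 2$ rotation factors and of the angles is an illustrative assumption, not the paper's exact construction.

```python
import numpy as np

def rotation(theta):
    """2x2 rotation matrix (illustrative building block, not the paper's exact factors)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def materialize_vector(factors, index):
    """Materialize codebook row `index` in O(N) time and space:
    a row of a Kronecker product is the Kronecker product of rows."""
    k = len(factors)
    v = np.array([1.0])
    for j, A in enumerate(factors):
        bit = (index >> (k - 1 - j)) & 1  # j-th bit of index, most significant first
        v = np.kron(v, A[bit])
    return v

def cleanup(factors, query):
    """Similarity of `query` to all N codebook vectors in O(N log N) time,
    by applying the Kronecker-factored matrix one small factor at a time."""
    k = len(factors)
    y = query.reshape([2] * k)
    for axis, A in enumerate(factors):
        y = np.tensordot(A, y, axes=([1], [axis]))  # apply one factor along one axis: O(N)
        y = np.moveaxis(y, 0, axis)                 # restore axis order
    sims = y.reshape(-1)
    return int(np.argmax(sims))  # index of the best-matching prototype

# Usage: an N = 8 codebook represented implicitly by 3 angles (O(log N) storage).
angles = [0.3, 1.1, 2.0]
factors = [rotation(t) for t in angles]
rng = np.random.default_rng(0)
target = 5
noisy = materialize_vector(factors, target) + 0.05 * rng.standard_normal(8)
print(cleanup(factors, noisy))  # recovers 5 for small enough noise
```

Because each factor is a rotation, the implicit codebook is orthogonal, so applying it to a noiseless codebook vector yields exactly one nonzero similarity; the factored matrix-vector product does $\log_2 N$ passes of $\mathcal{O}(N)$ work each, giving the linearithmic clean-up time.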