Torus embeddings

📅 2026-03-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a structural mismatch between conventional deep learning embeddings—typically designed in Euclidean or hyperspherical spaces—and the toroidal topology inherent in hardware-based integer representations with overflow, which leads to inefficient use of representational capacity. To bridge this gap, the paper introduces toroidal topology directly into deep embedding design for the first time. By modifying mainstream deep learning frameworks, the authors construct embeddings with intrinsic toroidal structure and propose a tailored normalization strategy to ensure training stability while preserving favorable quantization properties. Experimental results demonstrate that the proposed toroidal embeddings achieve performance on par with hyperspherical counterparts, while offering superior training stability and quantization efficiency, thereby providing a simple yet effective solution for resource-constrained TinyML deployments.

Technology Category

Application Category

📝 Abstract
Many data representations are vectors of continuous values. In particular, deep learning embeddings are data-driven representations, typically either unconstrained in Euclidean space, or constrained to a hypersphere. These may also be translated into integer representations (quantised) for efficient large-scale use. However, the fundamental (and most efficient) numeric representation in the overwhelming majority of existing computers is integers with overflow -- and vectors of these integers do not correspond to either of these spaces, but instead to the topology of a (hyper)torus. This mismatch can lead to wasted representation capacity. Here we show that common deep learning frameworks can be adapted, quite simply, to create representations with inherent toroidal topology. We investigate two alternative strategies, demonstrating that a normalisation-based strategy leads to training with desirable stability and performance properties, comparable to a standard hyperspherical L2 normalisation. We also demonstrate that a torus embedding maintains desirable quantisation properties. The torus embedding does not outperform hypersphere embeddings in general, but is comparable, and opens the possibility to train deep embeddings which have an extremely simple pathway to efficient `TinyML' embedded implementation.
Problem

Research questions and friction points this paper is trying to address.

torus embeddings
representation capacity
quantisation
deep learning embeddings
topological mismatch
Innovation

Methods, ideas, or system contributions that make the work stand out.

torus embedding
quantisation
TinyML
topological representation
normalisation
🔎 Similar Papers
No similar papers found.