🤖 AI Summary
This work addresses the challenge of jointly improving privacy preservation, model interpretability, and parameter compression in neural networks. We propose a black-box Tensor Train (TT) construction algorithm based on randomized sketching and cross-interpolation, enabling efficient parameter tensorization from only a small number of sample points. To our knowledge, this is the first approach to employ TT decomposition for parameter obfuscation, enhancing privacy during both training and inference, while simultaneously supporting direct physical interpretation: topological phases of matter can be explicitly decoded from the learned TT representation. We further introduce a general-purpose initialization strategy tailored for efficient TT-based training. Experiments demonstrate that our method achieves a superior trade-off between memory footprint and computational latency, outperforming conventional neural-network tensorization techniques.
📝 Abstract
We present a tensorization algorithm for constructing tensor train representations of functions, drawing on sketching and cross-interpolation ideas. The method requires only black-box access to the target function and a small set of sample points defining the domain of interest, making it particularly well suited to machine learning models, where the domain of interest is naturally defined by the training dataset. We show that this approach can enhance the privacy and interpretability of neural network models. Specifically, we apply our decomposition to (i) obfuscate neural networks whose parameters encode patterns tied to the training data distribution, and (ii) estimate topological phases of matter, which are readily accessible from the tensor train representation. Additionally, we show that this tensorization serves as an efficient initialization method for optimizing tensor trains in general settings and that, for model compression, our algorithm achieves a superior trade-off between memory and time complexity compared to conventional neural-network tensorization methods.
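For context, the conventional baseline the abstract compares against can be sketched as a TT-SVD: a left-to-right sweep of truncated SVDs that splits off one tensor train core per mode. Unlike the paper's black-box sketching and cross-interpolation approach, it requires the full tensor in memory. This is a minimal illustrative sketch (function names and shapes are our own, not the paper's):

```python
import numpy as np

def tt_svd(tensor, max_rank):
    # Conventional TT-SVD baseline: sweep left to right, splitting off one
    # TT core per mode via a truncated SVD. Requires the *full* tensor in
    # memory, which is what black-box sketching methods avoid.
    dims = tensor.shape
    cores, rank, mat = [], 1, tensor
    for d in dims[:-1]:
        mat = mat.reshape(rank * d, -1)
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r_new = min(max_rank, len(S))
        cores.append(U[:, :r_new].reshape(rank, d, r_new))
        mat = S[:r_new, None] * Vt[:r_new]  # carry remainder to next mode
        rank = r_new
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_to_full(cores):
    # Contract the TT cores back into a full tensor (for error checking).
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return np.squeeze(out, axis=(0, -1))
```

On an exactly low-rank input (e.g. an outer product of vectors) the reconstruction is exact up to floating-point error; on general tensors, `max_rank` controls the memory/accuracy trade-off that TT-based compression exploits.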