Tensorization of neural networks for improved privacy and interpretability

📅 2025-01-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of jointly optimizing privacy preservation, model interpretability, and parameter compression in neural networks. We propose a black-box Tensor Train (TT) construction algorithm based on randomized sketching and cross-interpolation, enabling efficient parameter tensorization from only a small number of sample points. To our knowledge, this is the first approach to employ TT decomposition for parameter obfuscation—enhancing privacy during both training and inference—while simultaneously supporting direct physical interpretation: topological phases of matter can be explicitly decoded from the learned TT representation. We further introduce a general-purpose initialization strategy tailored for efficient TT-based training. Experiments demonstrate that our method achieves a superior trade-off between memory footprint and computational latency, significantly outperforming conventional tensorization techniques in compression efficacy.

📝 Abstract
We present a tensorization algorithm for constructing tensor train representations of functions, drawing on sketching and cross interpolation ideas. The method only requires black-box access to the target function and a small set of sample points defining the domain of interest. Thus, it is particularly well-suited for machine learning models, where the domain of interest is naturally defined by the training dataset. We show that this approach can be used to enhance the privacy and interpretability of neural network models. Specifically, we apply our decomposition to (i) obfuscate neural networks whose parameters encode patterns tied to the training data distribution, and (ii) estimate topological phases of matter that are easily accessible from the tensor train representation. Additionally, we show that this tensorization can serve as an efficient initialization method for optimizing tensor trains in general settings, and that, for model compression, our algorithm achieves a superior trade-off between memory and time complexity compared to conventional tensorization methods of neural networks.
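To make the tensor-train notion concrete, here is a minimal sketch of the classical TT-SVD baseline in NumPy: it factors a dense tensor into a chain of 3-way cores by sequential SVDs. Note this is *not* the paper's algorithm — the paper's sketching/cross-interpolation approach needs only black-box access to sampled entries, whereas TT-SVD requires the full tensor — but it illustrates the target representation. The function name and `max_rank` parameter are illustrative choices, not from the paper.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a dense tensor into tensor-train (TT) cores via sequential SVDs.

    Classical TT-SVD baseline for illustration only; the paper's black-box
    sketching/cross-interpolation method avoids forming the full tensor.
    """
    shape = tensor.shape
    cores = []
    unfolding = tensor.reshape(shape[0], -1)
    rank = 1
    for k in range(len(shape) - 1):
        U, S, Vt = np.linalg.svd(unfolding, full_matrices=False)
        r = min(max_rank, len(S))
        # Core k: shape (previous rank, physical dimension, new rank)
        cores.append(U[:, :r].reshape(rank, shape[k], r))
        # Fold the remainder for the next SVD step
        unfolding = (S[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        rank = r
    cores.append(unfolding.reshape(rank, shape[-1], 1))
    return cores

# Contract the cores back together and check exact recovery
# (ranks here are large enough that no truncation occurs).
rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4, 4, 4))
cores = tt_svd(T, max_rank=16)
approx = cores[0]
for core in cores[1:]:
    approx = np.tensordot(approx, core, axes=([-1], [0]))
approx = approx.reshape(T.shape)
print(np.allclose(approx, T))  # True: untruncated TT-SVD is exact
```

Compression comes from truncating `max_rank`: the cores then store far fewer parameters than the original tensor, which is the trade-off the abstract refers to when applying tensorization to neural-network weights.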
Problem

Research questions and friction points this paper is trying to address.

Neural Network Optimization
Privacy Preservation
Explainable AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tensor Train Representation
Neural Network Security
Model Compression
José Ramón Pareja Monturiol
PhD student, Universidad Complutense de Madrid
Machine learning, Deep learning, Differential privacy, Tensor Networks
Alejandro Pozas-Kerstjens
Université de Genève
Relativistic Quantum Information, Tensor Networks, Nonlocality, Quantum Networks, Machine Learning
David Pérez-García
Departamento de Análisis Matemático, Universidad Complutense de Madrid, 28040 Madrid, Spain; Instituto de Ciencias Matemáticas (CSIC-UAM-UC3M-UCM), 28049 Madrid, Spain