LuxIA: A Lightweight Unitary matriX-based Framework Built on an Iterative Algorithm for Photonic Neural Network Training

📅 2025-12-24
🤖 AI Summary
To address the prohibitive memory and computational overhead, as well as the poor scalability, arising from transfer matrix computation in large-scale photonic neural network (PNN) training, this paper proposes an iterative slicing-based lightweight unitary matrix computation framework. The method decomposes large-scale unitary matrices into small, iteratively updated sub-blocks, enabling end-to-end backpropagation while ensuring linear memory growth and efficient gradient computation. By integrating explicit unitary constraint modeling with parameterized photonic circuit simulation, the framework enables, for the first time, full-stack training of PNNs comprising thousands of optical units. Experiments on MNIST, Digits, and Olivetti Faces demonstrate a 3.2–8.7× training speedup and a reduction in memory consumption to one-fifth of prior approaches, substantially overcoming key scalability bottlenecks in PNN training.
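The summary's core idea, applying a large unitary transfer matrix as a sequence of small sub-blocks so that only tiny slices are ever materialized and memory grows linearly with mesh size, can be sketched as below. This is a generic illustration, not the paper's implementation: the Mach-Zehnder interferometer (MZI) parameterization, the layer layout, and the names `mzi`, `apply_layer`, and `forward` are assumptions.

```python
import numpy as np

def mzi(theta, phi):
    """2x2 unitary transfer matrix of a Mach-Zehnder interferometer
    (one common parameterization; the paper's convention may differ)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.exp(1j * theta / 2) * np.array([
        [np.exp(1j * phi) * c, -s],
        [np.exp(1j * phi) * s,  c],
    ])

def apply_layer(x, params, offset):
    """Apply one mesh layer of 2x2 blocks to the state vector.
    Only 2x2 slices are materialized, never the full D x D layer
    matrix, so peak memory stays linear in the number of modes."""
    y = x.copy()
    for k, (theta, phi) in enumerate(params):
        i = offset + 2 * k
        y[i:i + 2] = mzi(theta, phi) @ y[i:i + 2]
    return y

def forward(x, layers):
    """Propagate x through a brick-wall mesh of alternating layers."""
    for depth, params in enumerate(layers):
        x = apply_layer(x, params, offset=depth % 2)
    return x

# Small demonstration: an 8-mode mesh with random phase settings.
rng = np.random.default_rng(0)
D = 8
layers = [
    [(rng.uniform(0, 2 * np.pi), rng.uniform(0, 2 * np.pi))
     for _ in range((D - depth % 2) // 2)]
    for depth in range(D)
]
x = rng.normal(size=D) + 1j * rng.normal(size=D)
y = forward(x, layers)
# A product of unitary blocks is unitary, so the vector norm is preserved.
assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))
```

Because each layer touches the input only through 2×2 slices, the same loop structure composes with reverse-mode autodiff (e.g. PyTorch tensors in place of numpy arrays), which is the property that makes a slicing-style computation compatible with backpropagation.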

📝 Abstract
Photonic neural networks (PNNs) present promising opportunities for accelerating machine learning by leveraging the unique benefits of photonic circuits. However, current state-of-the-art PNN simulation tools face significant scalability challenges when training large-scale PNNs, due to the computational demands of transfer matrix calculations, resulting in high memory and time consumption. To overcome these limitations, we introduce the Slicing method, an efficient transfer matrix computation approach compatible with backpropagation. We integrate this method into LuxIA, a unified simulation and training framework. The Slicing method substantially reduces memory usage and execution time, enabling scalable simulation and training of large PNNs. Experimental evaluations across various photonic architectures and standard datasets, including MNIST, Digits, and Olivetti Faces, show that LuxIA consistently surpasses existing tools in speed and scalability. Our results advance the state of the art in PNN simulation, making it feasible to explore and optimize larger, more complex architectures. By addressing key computational bottlenecks, LuxIA facilitates broader adoption and accelerates innovation in AI hardware through photonic technologies. This work paves the way for more efficient and scalable photonic neural network research and development.
Problem

Research questions and friction points this paper is trying to address.

High memory and time consumption when training large-scale photonic neural networks
Poor scalability of simulation for complex photonic architectures
Computational bottlenecks in existing photonic neural network simulation tools
Innovation

Methods, ideas, or system contributions that make the work stand out.

Slicing method reduces memory and time for matrix calculations
LuxIA framework integrates Slicing for scalable photonic network training
Experimental results show LuxIA outperforms existing tools in speed
Tzamn Melendez Carmona
Department of Control and Computer Engineering, Politecnico di Torino, Turin, Italy
Federico Marchesin
Photonics Research Group, Ghent University - imec, Ghent, Belgium
Marco P. Abrate
Department of Cell and Developmental Biology, University College London, London, UK
Peter Bienstman
Ghent University
photonics
Stefano Di Carlo
Full Professor, Politecnico di Torino
test, reliability, bioinformatics, cybersecurity, neuromorphic computing
Alessandro Savino
Associate Professor - Politecnico di Torino, DAUIN
Dependability, Edge Computing, Approximate Computing, Computing Architectures, Bioinformatics