A Hardware-Efficient Photonic Tensor Core: Accelerating Deep Neural Networks with Structured Compression

📅 2025-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the scalability bottlenecks of photonic integrated circuits (PICs) for general matrix multiplication (GEMM), namely large footprint, costly electro-optic interfaces, and complex control, this work proposes the block-circulant photonic tensor core (CirPTC), a photonic accelerator for structure-compressed optical neural networks (StrC-ONNs). The core idea is to combine structure-aware weight compression with hardware-aware training, preserving representational capacity while compensating for on-chip non-idealities. Experiments demonstrate up to a 74.91% reduction in trainable parameters, a projected computational density of 5.84 TOPS/mm², and a power efficiency of 47.94 TOPS/W, a 6.87× improvement attributed to hardware-software co-design, with no significant accuracy loss on image classification. CirPTC thus pushes the efficiency and scalability limits of optical GEMM.
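The block-circulant structure is what drives the parameter savings: each k × k block of a weight matrix is a circulant matrix defined by a single length-k vector, so the parameter count of a dense layer drops by a factor of k, and each block's matrix-vector product reduces to a circular convolution computable via FFT. Below is a minimal NumPy sketch of that computation; the dimensions, block size, and function names are illustrative, not the authors' implementation. Note that a block size of k = 4 already gives a 75% parameter reduction, in line with the 74.91% reported for the full networks.

```python
import numpy as np
from scipy.linalg import circulant

def block_circulant_matvec(blocks, x, k):
    """Multiply a block-circulant matrix by x using FFTs.

    blocks: (p, q, k) array; blocks[i, j] is the first column of the
            k x k circulant block C_ij (output dim p*k, input dim q*k).
    x:      input vector of length q*k.
    """
    p, q, _ = blocks.shape
    X = np.fft.fft(x.reshape(q, k), axis=-1)
    W = np.fft.fft(blocks, axis=-1)
    # Circulant matvec = circular convolution: elementwise product in
    # the Fourier domain, summed over the q input blocks.
    y = np.fft.ifft(np.einsum('pqk,qk->pk', W, X), axis=-1).real
    return y.reshape(p * k)

# A 64x64 dense layer stores 4096 weights; with k = 4 the block-circulant
# version stores only 16 * 16 * 4 = 1024, a 75% reduction.
rng = np.random.default_rng(0)
p = q = 16
k = 4
blocks = rng.standard_normal((p, q, k))
x = rng.standard_normal(q * k)

# Check against an explicitly assembled dense block-circulant matrix.
dense = np.block([[circulant(blocks[i, j]) for j in range(q)] for i in range(p)])
assert np.allclose(dense @ x, block_circulant_matvec(blocks, x, k))
```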

📝 Abstract
Recent advancements in artificial intelligence (AI) and deep neural networks (DNNs) have revolutionized numerous fields, enabling complex tasks by extracting intricate features from large datasets. However, the exponential growth in computational demands has outstripped the capabilities of traditional electrical hardware accelerators. Optical computing offers a promising alternative due to its inherent advantages of parallelism, high computational speed, and low power consumption. Yet, current photonic integrated circuits (PICs) designed for general matrix multiplication (GEMM) are constrained by large footprints, high costs of electro-optical (E-O) interfaces, and high control complexity, limiting their scalability. To overcome these challenges, we introduce a block-circulant photonic tensor core (CirPTC) for a structure-compressed optical neural network (StrC-ONN) architecture. By applying a structured compression strategy to weight matrices, StrC-ONN significantly reduces model parameters and hardware requirements while preserving the universal representability of networks and maintaining comparable expressivity. Additionally, we propose a hardware-aware training framework to compensate for on-chip nonidealities to improve model robustness and accuracy. We experimentally demonstrate image processing and classification tasks, achieving up to a 74.91% reduction in trainable parameters while maintaining competitive accuracies. Performance analysis expects a computational density of 5.84 tera operations per second (TOPS) per mm^2 and a power efficiency of 47.94 TOPS/W, marking a 6.87-times improvement achieved through the hardware-software co-design approach. By reducing both hardware requirements and control complexity across multiple dimensions, this work explores a new pathway to push the limits of optical computing in the pursuit of high efficiency and scalability.
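The hardware-aware training framework can be pictured as training through a perturbed forward pass, so the learned weights stay accurate when deployed on imperfect analog hardware. Below is a schematic PyTorch sketch of that idea under a simple multiplicative-Gaussian noise assumption; the paper's actual framework compensates for measured on-chip non-idealities, which this generic stand-in does not capture.

```python
import torch
import torch.nn as nn

class NoiseAwareLinear(nn.Module):
    """Linear layer trained with multiplicative Gaussian weight noise,
    a generic stand-in for on-chip non-idealities (hypothetical model)."""

    def __init__(self, in_features, out_features, sigma=0.02):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.sigma = sigma  # assumed relative noise level

    def forward(self, x):
        w = self.weight
        if self.training:
            # Resample device noise on every forward pass; gradients still
            # flow to the clean weights, so the optimizer learns parameters
            # that are robust on average to the perturbation.
            w = w * (1.0 + self.sigma * torch.randn_like(w))
        return x @ w.t()
```

At inference time (model.eval()) the clean weights are used; robustness comes from the noise having been present throughout training.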
Problem

Research questions and friction points this paper is trying to address.

Optimizing photonic tensor cores for deep neural networks
Reducing hardware footprint and control complexity in optical computing
Enhancing computational density and power efficiency in the StrC-ONN architecture
Innovation

Methods, ideas, or system contributions that make the work stand out.

Block-circulant photonic tensor core (CirPTC)
Structured compression strategy for weight matrices
Hardware-aware training framework
👥 Authors
Shupeng Ning
Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas 78758, United States
Hanqing Zhu
University of Texas at Austin
Hardware/System-aware AI · Hardware for AI
Chenghao Feng
Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas 78758, United States
Jiaqi Gu
School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, Arizona 85281, United States
David Z. Pan
Professor, Silicon Labs Endowed Chair, ECE Dept., University of Texas at Austin
Electronic Design Automation · Design for Manufacturing · VLSI · Hardware · Machine Learning
Ray T. Chen
Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas 78758, United States