Kolmogorov-Arnold Network for Transistor Compact Modeling

📅 2025-03-19
🤖 AI Summary
Traditional neural networks employed in transistor compact modeling are largely black-box models, lacking physical interpretability and thus limiting their utility in device physics analysis and process optimization. To address this, we introduce the Kolmogorov-Arnold Network (KAN) to compact modeling for the first time, proposing an end-to-end interpretable framework. Our method integrates Fourier-basis-enhanced KANs and establishes a SPICE-compatible training and validation pipeline; crucially, it enables automatic symbolic formula extraction from learned weights, unifying model interpretability with circuit analyzability. Experimental results demonstrate superior accuracy over both multilayer perceptrons (MLPs) and industrial-standard models on key metrics, including FinFET gate current and drain/source charge, while preserving physical fidelity. This work breaks the interpretability bottleneck in AI-driven semiconductor device modeling, significantly enhancing physical insight and accelerating process iteration.

📝 Abstract
Neural network (NN)-based transistor compact modeling has recently emerged as a transformative solution for accelerating device modeling and SPICE circuit simulations. However, conventional NN architectures, despite their widespread adoption in state-of-the-art methods, primarily function as black-box problem solvers. This lack of interpretability significantly limits their capacity to extract and convey meaningful insights into learned data patterns, posing a major barrier to their broader adoption in critical modeling tasks. This work introduces, for the first time, the Kolmogorov-Arnold network (KAN) for transistor compact modeling: a groundbreaking NN architecture that seamlessly integrates interpretability with high precision in physics-based function modeling. We systematically evaluate the performance of KAN and Fourier KAN (FKAN) for FinFET compact modeling, benchmarking them against the golden industry-standard compact model and the widely used MLP architecture. Our results reveal that KAN and FKAN consistently achieve superior prediction accuracy for critical figures of merit, including gate current, drain charge, and source charge. Furthermore, we demonstrate and improve the unique ability of KAN to derive symbolic formulas from learned data patterns, a capability that not only enhances interpretability but also facilitates in-depth transistor analysis and optimization. This work highlights the transformative potential of KAN in bridging the gap between interpretability and precision in NN-driven transistor compact modeling. By providing a robust and transparent approach to transistor modeling, KAN represents a pivotal advancement for the semiconductor industry as it navigates the challenges of advanced technology scaling.
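To make the Fourier-KAN idea in the abstract concrete: in a KAN, every edge carries a learnable one-dimensional function rather than a scalar weight, and a Fourier KAN parameterizes each edge function with a truncated sine/cosine basis. The sketch below is illustrative only; class and parameter names (`FourierKANLayer`, `n_freq`) are assumptions, not the paper's implementation.

```python
import numpy as np

class FourierKANLayer:
    """One KAN layer with Fourier-basis edge functions (illustrative sketch).

    Each edge (i, j) carries a learnable 1-D function
        phi_ij(x) = sum_k a_ijk * cos(k*x) + b_ijk * sin(k*x),
    and output j sums phi_ij over all inputs i, following the
    Kolmogorov-Arnold representation idea.
    """

    def __init__(self, n_in, n_out, n_freq=4, seed=0):
        rng = np.random.default_rng(seed)
        # Fourier coefficients, shape (n_in, n_out, n_freq)
        self.a = rng.normal(0.0, 0.1, (n_in, n_out, n_freq))
        self.b = rng.normal(0.0, 0.1, (n_in, n_out, n_freq))
        self.k = np.arange(1, n_freq + 1)  # frequencies 1..n_freq

    def __call__(self, x):
        # x: (batch, n_in) -> (batch, n_out)
        kx = x[:, :, None] * self.k            # (batch, n_in, n_freq)
        cos, sin = np.cos(kx), np.sin(kx)
        # einsum sums over inputs i and frequencies f
        return (np.einsum("bif,iof->bo", cos, self.a)
                + np.einsum("bif,iof->bo", sin, self.b))

# e.g. two device inputs (V_gs, V_ds) mapped to one output (a current)
layer = FourierKANLayer(n_in=2, n_out=1)
y = layer(np.array([[0.5, -0.3]]))
print(y.shape)  # (1, 1)
```

Because every edge function is an explicit finite Fourier series, the learned coefficients can later be read off directly, which is what makes symbolic extraction tractable.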
Problem

Research questions and friction points this paper is trying to address.

Enhance interpretability in neural network-based transistor modeling.
Improve precision in physics-based function modeling for transistors.
Derive symbolic formulas from learned data patterns for analysis.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Kolmogorov-Arnold Network enhances interpretability in modeling.
KAN integrates physics-based precision with neural networks.
KAN derives symbolic formulas from learned data patterns.
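The last bullet, deriving symbolic formulas from learned patterns, can be sketched as a library-matching step: sample each learned 1-D edge function and pick the candidate closed-form expression with the best affine least-squares fit. This is a hypothetical minimal version of the idea; the candidate library and scoring are my assumptions, not the paper's procedure.

```python
import numpy as np

# Hypothetical candidate library of closed-form 1-D functions.
CANDIDATES = {
    "sin(x)": np.sin,
    "exp(x)": np.exp,
    "x^2": np.square,
    "tanh(x)": np.tanh,
}

def symbolify(xs, ys):
    """Return (name, affine coefficients, R^2) of the candidate f such that
    c0 + c1*f(x) best fits the sampled edge function (xs, ys)."""
    best = None
    for name, f in CANDIDATES.items():
        A = np.stack([np.ones_like(xs), f(xs)], axis=1)
        coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
        pred = A @ coef
        r2 = 1.0 - np.sum((ys - pred) ** 2) / np.sum((ys - ys.mean()) ** 2)
        if best is None or r2 > best[2]:
            best = (name, coef, r2)
    return best

# Pretend this is an edge function recovered from trained KAN weights.
xs = np.linspace(-2.0, 2.0, 100)
ys = 0.5 + 2.0 * np.tanh(xs)
name, coef, r2 = symbolify(xs, ys)
print(name)  # tanh(x)
```

A real extraction pipeline would search a larger library and compose per-edge formulas along the network, but the fit-and-score loop above is the core mechanism.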
Rodion Novkin
Technical University of Munich; TUM School of Computation, Information and Technology; Chair of AI Processor Design; Munich Institute of Robotics and Machine Intelligence, Munich, Germany
Hussam Amrouch
Professor (W3) of AI Processor Design, Technical University of Munich
AI Acceleration, ASIC Processor Design, Emerging Technology, Brain-inspired Computing, ML-CAD