Few-shot Learning on AMS Circuits and Its Application to Parasitic Capacitance Prediction

📅 2025-07-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the scarcity of labeled training data for parasitic capacitance prediction in analog/mixed-signal (AMS) circuits, this paper proposes CircuitGPS, a few-shot learning framework for circuit parasitic effect prediction. Methodologically, it models circuit netlists as heterogeneous graphs and combines link-prediction pretraining, edge-regression fine-tuning, subgraph embedding learning, and a lightweight positional encoding. Its small-hop sampling paired with a hybrid graph Transformer enables zero-shot transfer to unseen designs and strong scalability. Across diverse circuit design tasks, CircuitGPS improves coupling-capacitance existence classification accuracy by at least 20% and reduces the mean absolute error (MAE) of capacitance value estimation by at least 0.067, significantly outperforming state-of-the-art approaches.

📝 Abstract
Graph representation learning is a powerful method to extract features from graph-structured data, such as analog/mixed-signal (AMS) circuits. However, training deep learning models for AMS designs is severely limited by the scarcity of integrated circuit design data. In this work, we present CircuitGPS, a few-shot learning method for parasitic effect prediction in AMS circuits. The circuit netlist is represented as a heterogeneous graph, with the coupling capacitance modeled as a link. CircuitGPS is pre-trained on link prediction and fine-tuned on edge regression. The proposed method starts with a small-hop sampling technique that converts a link or a node into a subgraph. Then, the subgraph embeddings are learned with a hybrid graph Transformer. Additionally, CircuitGPS integrates a low-cost positional encoding that summarizes the positional and structural information of the sampled subgraph. CircuitGPS improves the accuracy of coupling existence by at least 20% and reduces the MAE of capacitance estimation by at least 0.067 compared to existing methods. Our method demonstrates strong inherent scalability, enabling direct application to diverse AMS circuit designs through zero-shot learning. Furthermore, the ablation studies provide valuable insights into graph models for representation learning.
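The abstract's "small-hop sampling" step (converting a candidate link into a subgraph before embedding it) can be illustrated with a minimal stdlib-only sketch. This is an assumption-laden toy, not the paper's code: the adjacency-dict graph, the helper names `k_hop_nodes` and `small_hop_subgraph`, and the device/net example are all hypothetical, and heterogeneous node types are reduced to string labels.

```python
from collections import deque

def k_hop_nodes(adj, seed, hops):
    """BFS outward up to `hops` edges from `seed`; return the visited node set."""
    seen = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == hops:
            continue
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    return seen

def small_hop_subgraph(adj, u, v, hops=1):
    """Induce the subgraph on the union of the h-hop neighborhoods of a
    candidate coupling link's two endpoints (u, v)."""
    nodes = k_hop_nodes(adj, u, hops) | k_hop_nodes(adj, v, hops)
    return {n: {m for m in adj[n] if m in nodes} for n in nodes}

# Toy "netlist" graph: device nodes (M1, M2) connected to net nodes.
adj = {
    "M1": {"net_a", "net_b"},
    "M2": {"net_b", "net_c"},
    "net_a": {"M1"},
    "net_b": {"M1", "M2"},
    "net_c": {"M2"},
}

# Candidate coupling link between net_a and net_c: keep only nodes
# within 1 hop of either endpoint.
sub = small_hop_subgraph(adj, "net_a", "net_c", hops=1)
print(sorted(sub))  # net_b is 2 hops from both endpoints, so it is dropped
```

In the paper's pipeline this sampled subgraph (not the full netlist graph) would be what the hybrid graph Transformer embeds, which is what keeps the method scalable to large designs.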
Problem

Research questions and friction points this paper is trying to address.

Predicting parasitic capacitance in AMS circuits from few labeled examples
Scarcity of labeled integrated circuit design data for training deep learning models
Improving accuracy and reducing estimation error in capacitance prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Few-shot learning for AMS parasitic capacitance prediction
Heterogeneous graph representation with hybrid Transformer
Low-cost positional encoding for subgraph information
Shan Shen
Baylor College of Medicine, Neuroscience
Yibin Zhang
Dept. Computer Science & Tech., BNRist, Tsinghua Univ., Beijing 100084, China
Hector Rodriguez Rodriguez
Dept. Computer Science & Tech., BNRist, Tsinghua Univ., Beijing 100084, China
Wenjian Yu
Dept. Computer Science & Technology, Tsinghua University
Numerical computing · EDA · Algorithm · Data mining and machine learning