Transferable Parasitic Estimation via Graph Contrastive Learning and Label Rebalancing in AMS Circuits

📅 2025-07-09
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the challenges of label scarcity, class imbalance, and structural heterogeneity in parasitic parameter estimation for analog/mixed-signal (AMS) circuits, this paper proposes CircuitGCL, a novel graph contrastive learning framework. It introduces hyperspherical representation dispersion for circuit graph learning, enabling topology-invariant node embeddings for the first time. By jointly leveraging self-supervised graph contrastive learning and a dual-balanced loss (balanced MSE + balanced softmax cross-entropy), CircuitGCL mitigates inter-circuit label bias and enhances generalization and transferability. Evaluated on TSMC 28nm process data, CircuitGCL achieves a 33.6%–44.2% improvement in R² for edge-level parasitic capacitance estimation and improves node-level ground capacitance classification F1-score by 0.9×–2.1× over state-of-the-art methods, demonstrating significant performance gains.

πŸ“ Abstract
Graph representation learning on Analog-Mixed Signal (AMS) circuits is crucial for various downstream tasks, e.g., parasitic estimation. However, the scarcity of design data, the unbalanced distribution of labels, and the inherent diversity of circuit implementations pose significant challenges to learning robust and transferable circuit representations. To address these limitations, we propose CircuitGCL, a novel graph contrastive learning framework that integrates representation scattering and label rebalancing to enhance transferability across heterogeneous circuit graphs. CircuitGCL employs a self-supervised strategy to learn topology-invariant node embeddings through hyperspherical representation scattering, eliminating dependency on large-scale data. Simultaneously, balanced mean squared error (MSE) and balanced softmax cross-entropy (bsmCE) losses are introduced to mitigate label distribution disparities between circuits, enabling robust and transferable parasitic estimation. Evaluated on parasitic capacitance estimation (edge-level task) and ground capacitance classification (node-level task) across TSMC 28nm AMS designs, CircuitGCL outperforms all state-of-the-art (SOTA) methods, with an $R^2$ improvement of $33.64\% \sim 44.20\%$ for edge regression and an F1-score gain of $0.9\times \sim 2.1\times$ for node classification. Our code is available at https://anonymous.4open.science/r/CircuitGCL-099B/README.md.
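The "hyperspherical representation scattering" named in the abstract is closely related to the uniformity objective of Wang & Isola (2020): embeddings are projected onto the unit hypersphere, and a pairwise Gaussian-potential loss pushes them apart. Below is a minimal NumPy sketch of that idea; the function name `scattering_loss` and the temperature `t` are illustrative assumptions, and the paper's exact objective may differ.

```python
import numpy as np

def scattering_loss(z, t=2.0):
    # Hypothetical uniformity-style scattering loss (Wang & Isola, 2020).
    # Embeddings are L2-normalized onto the unit hypersphere; minimizing
    # the log of the mean pairwise Gaussian potential spreads them out.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sq_dists = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    iu = np.triu_indices(len(z), k=1)  # distinct pairs only
    return np.log(np.exp(-t * sq_dists[iu]).mean())
```

Identical (collapsed) embeddings give a loss of 0, while well-scattered embeddings drive the loss negative, so gradient descent on this quantity disperses the representations.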
Problem

Research questions and friction points this paper is trying to address.

Learning robust circuit representations with scarce design data
Addressing unbalanced label distribution in AMS circuits
Enhancing transferability across diverse circuit implementations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph contrastive learning for circuit representation
Hyperspherical representation scattering technique
Balanced MSE and softmax cross-entropy losses