AI Summary
To address the challenges of label scarcity, class imbalance, and structural heterogeneity in parasitic parameter estimation for analog/mixed-signal (AMS) circuits, this paper proposes CircuitGCL, a novel graph contrastive learning framework. It introduces hyperspherical representation dispersion for circuit graph learning, enabling topology-invariant node embeddings for the first time. By jointly leveraging self-supervised graph contrastive learning and a dual-balanced loss (balanced MSE + balanced softmax cross-entropy), CircuitGCL mitigates inter-circuit label bias and enhances generalization and transferability. Evaluated on TSMC 28nm process data, CircuitGCL achieves a 33.6%–44.2% improvement in R² for edge-level parasitic capacitance estimation and boosts node-level ground capacitance classification F1-score by 0.9×–2.1× over state-of-the-art methods, demonstrating significant performance gains.
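To make the "hyperspherical representation dispersion" idea concrete, below is a minimal NumPy sketch of a generic hyperspherical uniformity objective in the style of Wang & Isola (2020): embeddings are projected onto the unit sphere and penalized for clustering together via a pairwise Gaussian kernel. The function name, the temperature `t`, and the exact kernel are illustrative assumptions; the paper's actual scattering loss may differ in form.

```python
import numpy as np

def hyperspherical_uniformity_loss(z, t=2.0):
    """Encourage node embeddings to spread out on the unit hypersphere.

    A generic uniformity objective (Wang & Isola, 2020), shown here only
    to illustrate the dispersion idea; not the paper's exact loss.
    z : (N, D) array of node embeddings, t : kernel temperature.
    """
    # Project embeddings onto the unit sphere.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    # Pairwise squared Euclidean distances between all embeddings.
    sq_dists = ((z[:, None, :] - z[None, :, :]) ** 2).sum(axis=-1)
    # Average Gaussian-kernel similarity over distinct pairs; lower
    # (more negative) values mean the points are more dispersed.
    off_diag = ~np.eye(len(z), dtype=bool)
    return np.log(np.exp(-t * sq_dists[off_diag]).mean())
```

Intuitively, well-scattered embeddings drive the loss strongly negative, while collapsed embeddings push it toward zero, so minimizing it disperses the representations.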
Abstract
Graph representation learning on Analog-Mixed Signal (AMS) circuits is crucial for various downstream tasks, e.g., parasitic estimation. However, the scarcity of design data, the unbalanced distribution of labels, and the inherent diversity of circuit implementations pose significant challenges to learning robust and transferable circuit representations. To address these limitations, we propose CircuitGCL, a novel graph contrastive learning framework that integrates representation scattering and label rebalancing to enhance transferability across heterogeneous circuit graphs. CircuitGCL employs a self-supervised strategy to learn topology-invariant node embeddings through hyperspherical representation scattering, eliminating dependency on large-scale data. Simultaneously, balanced mean squared error (MSE) and balanced softmax cross-entropy (bsmCE) losses are introduced to mitigate label distribution disparities between circuits, enabling robust and transferable parasitic estimation. Evaluated on parasitic capacitance estimation (edge-level task) and ground capacitance classification (node-level task) across TSMC 28nm AMS designs, CircuitGCL outperforms all state-of-the-art (SOTA) methods, with an $R^2$ improvement of 33.64%–44.20% for edge regression and an F1-score gain of 0.9×–2.1× for node classification. Our code is available [here](https://anonymous.4open.science/r/CircuitGCL-099B/README.md).
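The balanced softmax cross-entropy loss mentioned above has a well-known standard formulation (Ren et al., 2020): per-class training counts are added to the logits as log-priors before the softmax, which compensates for label imbalance. The sketch below is a minimal NumPy illustration of that standard formulation under our own naming; it is not the authors' implementation, and CircuitGCL's variant may differ in detail.

```python
import numpy as np

def balanced_softmax_ce(logits, labels, class_counts):
    """Balanced softmax cross-entropy (standard formulation, Ren et al. 2020).

    Illustrative sketch only -- not CircuitGCL's exact loss.
    logits : (N, C) raw scores, labels : (N,) int class ids,
    class_counts : (C,) number of training samples per class.
    """
    # Shift logits by log class priors so frequent classes are penalized.
    adj = logits + np.log(class_counts)
    # Numerically stable log-softmax.
    adj = adj - adj.max(axis=1, keepdims=True)
    log_probs = adj - np.log(np.exp(adj).sum(axis=1, keepdims=True))
    # Mean negative log-likelihood of the true classes.
    return -log_probs[np.arange(len(labels)), labels].mean()
```

With uniform class counts the log-prior term is constant, and the loss reduces exactly to ordinary softmax cross-entropy; with skewed counts, minority-class predictions are favored at inference time.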