🤖 AI Summary
Analog/mixed-signal (AMS) circuit design faces two challenges: a scarcity of labeled data and the difficulty of embedding domain knowledge into automated design flows. Traditional black-box optimization achieves high sampling efficiency but lacks interpretability, often wasting evaluations in low-value regions; learning-based methods incorporate structural priors yet suffer from poor generalization and high retraining costs; LLM-based approaches rely heavily on manual intervention and lack transparency.
Method: We propose TopoSizing, the first end-to-end topology-aware framework that jointly leverages graph algorithms for automated hierarchical netlist parsing, an LLM agent for interpretable hypothesis-verification-refinement circuit annotation, and Bayesian optimization enhanced with LLM-guided initial sampling and stagnation-triggered trust-region updates.
Contribution/Results: Experiments demonstrate significant gains in optimization efficiency while preserving feasibility, together with strong generalization across diverse AMS topologies and full decision transparency.
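The hierarchical parsing step can be illustrated with a small sketch. The netlist encoding and the current-mirror rule below are illustrative assumptions for one module type, not the paper's actual parsing rules:

```python
# Minimal sketch of graph-style netlist structuring: group devices that
# share nets into a named module. The netlist tuples and the mirror rule
# (shared gate net, one diode-connected transistor) are assumptions.
from collections import defaultdict

# Hypothetical flat netlist: (device, type, {pin: net})
netlist = [
    ("M1", "nmos", {"g": "bias", "d": "bias", "s": "gnd"}),  # diode-connected
    ("M2", "nmos", {"g": "bias", "d": "out", "s": "gnd"}),   # mirror output
    ("M3", "pmos", {"g": "in", "d": "out", "s": "vdd"}),
]

def find_current_mirrors(devices):
    """Group transistors sharing a gate net where one is diode-connected."""
    by_gate = defaultdict(list)
    for name, dtype, pins in devices:
        if dtype in ("nmos", "pmos"):
            by_gate[pins["g"]].append((name, pins))
    mirrors = []
    for gate, group in by_gate.items():
        diode = [n for n, p in group if p["d"] == gate]
        if diode and len(group) > 1:
            mirrors.append({"module": "current_mirror",
                            "devices": sorted(n for n, _ in group)})
    return mirrors

modules = find_current_mirrors(netlist)
print(modules)  # [{'module': 'current_mirror', 'devices': ['M1', 'M2']}]
```

Stage-level grouping would then repeat the same idea one level up, merging recognized modules connected along the signal path; the LLM agent annotates each recognized group rather than raw devices.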
📝 Abstract
Analog and mixed-signal circuit design remains challenging due to the shortage of high-quality data and the difficulty of embedding domain knowledge into automated flows. Traditional black-box optimization achieves sampling efficiency but lacks circuit understanding, which often causes evaluations to be wasted in low-value regions of the design space. In contrast, learning-based methods embed structural knowledge but are case-specific and costly to retrain. Recent attempts with large language models show potential, yet they often rely on manual intervention, limiting generality and transparency. We propose TopoSizing, an end-to-end framework that performs robust circuit understanding directly from raw netlists and translates this knowledge into optimization gains. Our approach first applies graph algorithms to organize circuits into a hierarchical device-module-stage representation. LLM agents then execute an iterative hypothesis-verification-refinement loop with built-in consistency checks, producing explicit annotations. Verified insights are integrated into Bayesian optimization through LLM-guided initial sampling and stagnation-triggered trust-region updates, improving efficiency while preserving feasibility.
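The stagnation-triggered trust-region mechanism can be sketched as follows. The toy objective, the thresholds, and the use of plain random sampling in place of a full surrogate-driven acquisition step are all illustrative assumptions; in the actual flow the seeds would come from the LLM agent and each evaluation would be a circuit simulation:

```python
# Sketch: optimization seeded from LLM-suggested points, with the trust
# region shrinking around the incumbent when progress stalls.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Toy stand-in for a circuit figure of merit (lower is better);
    # a real flow would invoke a SPICE simulation here.
    return float(np.sum((x - 0.7) ** 2))

# Hypothetical LLM-guided initial samples in a normalized design space.
llm_seeds = np.array([[0.2, 0.2], [0.8, 0.6], [0.5, 0.9]])

center = llm_seeds[np.argmin([objective(s) for s in llm_seeds])]
best = objective(center)
radius, stall, patience = 0.5, 0, 5

for _ in range(200):
    # Propose a candidate inside the current trust region,
    # clipped to the [0, 1]^2 design space.
    x = np.clip(center + rng.uniform(-radius, radius, size=2), 0.0, 1.0)
    y = objective(x)
    if y < best - 1e-9:
        best, center, stall = y, x, 0   # improvement: recenter the region
    else:
        stall += 1
    if stall >= patience:               # stagnation detected:
        radius *= 0.5                   # shrink the trust region
        stall = 0

print(round(best, 6))
```

The shrinking radius concentrates later evaluations near the incumbent, which is what keeps samples out of low-value regions once the coarse search has converged.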