GENIE-ASI: Generative Instruction and Executable Code for Analog Subcircuit Identification

📅 2025-08-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the reliance on manual expertise or large-scale labeled datasets for analog subcircuit identification, this paper proposes the first training-free large language model (LLM)-based approach: leveraging in-context learning to generate natural-language instructions, which are automatically translated into executable code for end-to-end subcircuit recognition in SPICE netlists. Key contributions include: (1) the first application of LLMs to automated analog subcircuit identification; (2) enhanced generalization via a code-generation paradigm, eliminating dependence on hand-crafted rules or annotated data; and (3) the construction of the first benchmark dataset dedicated to operational amplifier identification. Experimental results demonstrate F1 scores of 1.0, 0.81, and 0.31 on simple, medium-complexity, and complex circuit structures, respectively—validating the feasibility and promise of LLMs in analog design automation.

📝 Abstract
Analog subcircuit identification is a core task in analog design, essential for simulation, sizing, and layout. Traditional methods often require extensive human expertise, rule-based encoding, or large labeled datasets. To address these challenges, we propose GENIE-ASI, the first training-free, large language model (LLM)-based methodology for analog subcircuit identification. GENIE-ASI operates in two phases: it first uses in-context learning to derive natural language instructions from a few demonstration examples, then translates these into executable Python code to identify subcircuits in unseen SPICE netlists. In addition, to evaluate LLM-based approaches systematically, we introduce a new benchmark composed of operational amplifier (op-amp) netlists that cover a wide range of subcircuit variants. Experimental results on the proposed benchmark show that GENIE-ASI matches rule-based performance on simple structures (F1-score = 1.0), remains competitive on moderate abstractions (F1-score = 0.81), and shows potential even on complex subcircuits (F1-score = 0.31). These findings demonstrate that LLMs can serve as adaptable, general-purpose tools in analog design automation, opening new research directions for foundation model applications in the field.
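To make the second phase concrete, the executable code GENIE-ASI generates might resemble a simple structural matcher over parsed SPICE device lines. The sketch below is an illustrative assumption, not the paper's actual generated code: it detects a basic NMOS current mirror (two NMOS devices sharing gate and source nets, with the reference device diode-connected).

```python
def parse_spice(netlist):
    """Parse MOSFET lines of a SPICE netlist into device records.
    M-element line format assumed: Mname drain gate source bulk model ..."""
    devices = []
    for line in netlist.strip().splitlines():
        parts = line.split()
        if parts and parts[0].upper().startswith("M") and len(parts) >= 6:
            name, d, g, s, b, model = parts[:6]
            devices.append({"name": name, "d": d, "g": g, "s": s, "model": model})
    return devices

def find_current_mirrors(devices):
    """Return (reference, output) pairs of NMOS devices forming a simple
    current mirror: shared gate net, shared source net, reference diode-connected."""
    mirrors = []
    nmos = [m for m in devices if m["model"].upper().startswith("N")]
    for ref in nmos:
        if ref["g"] != ref["d"]:  # reference device must be diode-connected
            continue
        for out in nmos:
            if out is not ref and out["g"] == ref["g"] and out["s"] == ref["s"]:
                mirrors.append((ref["name"], out["name"]))
    return mirrors

netlist = """
M1 n1 n1 gnd gnd NMOS
M2 out n1 gnd gnd NMOS
M3 vdd bias out gnd NMOS
"""
print(find_current_mirrors(parse_spice(netlist)))  # [('M1', 'M2')]
```

A hand-written rule library would hard-code many such matchers; the paper's point is that the LLM writes them on the fly from a few demonstrations.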
Problem

Research questions and friction points this paper is trying to address.

Automating analog subcircuit identification without training data
Reducing reliance on human expertise in circuit analysis
Generating executable code from natural language instructions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free LLM methodology for identification
In-context learning generates executable Python code
New benchmark for systematic LLM evaluation
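The two-phase flow described above can be sketched end to end. Everything here is an assumption for illustration: the `llm` callable, prompt wording, and function names are hypothetical stand-ins, not the paper's actual prompts or interfaces.

```python
def derive_instructions(llm, demonstrations):
    """Phase 1: in-context learning. Show the LLM a few annotated example
    netlists and ask for step-by-step natural-language identification rules."""
    examples = "\n\n".join(demonstrations)
    prompt = (
        "Given these SPICE netlists with labeled subcircuits, write "
        "step-by-step rules for identifying the subcircuit type:\n\n" + examples
    )
    return llm(prompt)

def instructions_to_code(llm, instructions):
    """Phase 2: ask the LLM to translate the rules into an executable
    Python function that takes a raw netlist string and returns matches."""
    prompt = (
        "Translate the following rules into a Python function "
        "identify(netlist) that returns matching device names:\n\n" + instructions
    )
    return llm(prompt)

def genie_asi(llm, demonstrations, unseen_netlist):
    """End-to-end sketch: derive rules, generate code, run it on a new netlist."""
    rules = derive_instructions(llm, demonstrations)
    code = instructions_to_code(llm, rules)
    scope = {}
    exec(code, scope)  # the generated code is expected to define identify()
    return scope["identify"](unseen_netlist)
```

Because only the demonstrations change per subcircuit class, no training or hand-crafted rule encoding is needed, which is the training-free property the paper highlights.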