🤖 AI Summary
Automated analog circuit topology synthesis faces severe bottlenecks in generation efficiency and constraint compliance due to an exponentially large search space and stringent design constraints.
Method: This paper proposes a two-stage LLM-driven framework combining instruction tuning with proximal policy optimization (PPO)-based reinforcement learning (RL). It introduces RL into circuit topology generation via a learnable multi-objective reward model that jointly scores functional correctness, structural efficiency, and output voltage accuracy, enabling end-to-end optimization.
Contribution/Results: Compared to the best baseline, the method improves the valid circuit generation rate by 12%, synthesis efficiency by 14%, and reduces topology redundancy by 38%; in few-shot settings, the valid-synthesis success rate exceeds 60%. This work is the first to empirically validate the feasibility and advantage of RL-augmented LLMs for hardware topology generation, significantly improving generalization and hard-constraint satisfaction.
📝 Abstract
Analog circuit topology synthesis is integral to Electronic Design Automation (EDA), enabling the automated creation of circuit structures tailored to specific design requirements. However, the vast design search space and strict constraint adherence make efficient synthesis challenging. Leveraging the versatility of Large Language Models (LLMs), we propose AUTOCIRCUIT-RL, a novel reinforcement learning (RL)-based framework for automated analog circuit synthesis. The framework operates in two phases: instruction tuning, where an LLM learns to generate circuit topologies from structured prompts encoding design constraints, and RL refinement, which further improves the instruction-tuned model using reward models that evaluate validity, efficiency, and output voltage. The refined model is then used directly to generate topologies that satisfy the design constraints. Empirical results show that AUTOCIRCUIT-RL generates ~12% more valid circuits and improves efficiency by ~14% compared to the best baselines, while reducing duplicate generation rates by ~38%. It achieves over 60% success in synthesizing valid circuits with limited training data, demonstrating strong generalization. These findings highlight the framework's effectiveness in scaling to complex circuits while maintaining efficiency and constraint adherence, marking a significant advancement in AI-driven circuit design.
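The multi-objective reward described above can be sketched as a scalar function suitable for PPO fine-tuning. This is a minimal illustration assuming a weighted-sum combination; the component names (validity, efficiency, voltage accuracy) follow the abstract, but the weights, functional forms, and the `circuit_reward` helper are hypothetical, not the paper's actual formulation.

```python
def circuit_reward(is_valid: bool, num_components: int,
                   target_voltage: float, output_voltage: float,
                   ref_components: int = 10,
                   w_valid: float = 1.0, w_eff: float = 0.5,
                   w_volt: float = 0.5) -> float:
    """Combine validity, structural efficiency, and output-voltage
    accuracy into one scalar reward (illustrative weighted sum)."""
    if not is_valid:
        return -1.0  # invalid topologies receive a flat penalty
    # Fewer components than a reference budget scores higher efficiency,
    # which discourages redundant topology elements.
    efficiency = max(0.0, 1.0 - num_components / (2 * ref_components))
    # Voltage accuracy decays with relative error from the target.
    rel_error = abs(output_voltage - target_voltage) / max(abs(target_voltage), 1e-6)
    voltage_score = max(0.0, 1.0 - rel_error)
    return w_valid + w_eff * efficiency + w_volt * voltage_score

# Example: a valid 8-component circuit producing 4.8 V against a 5.0 V target
r = circuit_reward(True, 8, 5.0, 4.8)
```

A weighted sum is the simplest way to fold several objectives into the single scalar that PPO requires; the paper's learnable reward model presumably replaces these hand-set weights with trained parameters.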