🤖 AI Summary
This work addresses the challenge of automatically discovering high-performance reconfigurable power converters within an exponentially large circuit topology space, a task traditionally hindered by reliance on expert intuition or limited to small-scale, inadequately validated designs. The authors propose an analytically guided evolutionary discovery framework that enables theoretical performance evaluation without requiring component parameters or SPICE simulations. By co-optimizing the generative model and its training distribution, the approach effectively mitigates mode collapse and overfitting. The method significantly outperforms existing techniques in syntactic validity, functional correctness, novelty, and performance. Notably, it discovers a novel 8-mode reconfigurable converter achieving a 23% improvement in figure of merit (FoM), with SPICE simulations confirming a 10% average absolute efficiency gain and up to a 17% improvement in peak single-mode efficiency.
📝 Abstract
Discovering superior circuit topologies requires navigating an exponentially large design space, a challenge traditionally reserved for human experts. Existing AI methods either select from predefined templates or generate novel topologies at a limited scale without rigorous verification, leaving large-scale performance-driven discovery underexplored. We present PowerGenie, a framework for automated discovery of higher-performance reconfigurable power converters at scale. PowerGenie introduces: (1) an automated analytical framework that determines converter functionality and theoretical performance limits without component sizing or SPICE simulation, and (2) an evolutionary finetuning method that co-evolves a generative model with its training distribution through fitness selection and uniqueness verification. Unlike existing methods that suffer from mode collapse and overfitting, our approach achieves higher syntax validity, function validity, novelty rate, and figure of merit (FoM). PowerGenie discovers a novel 8-mode reconfigurable converter with 23% higher FoM than the best training topology. SPICE simulations confirm average absolute efficiency gains of 10% across 8 modes and up to 17% at a single mode. Code will be released upon publication.
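The abstract's evolutionary finetuning idea, where a generator's outputs are filtered by uniqueness verification and fitness selection and then fed back as the next training distribution, can be sketched as a simple selection loop. This is a minimal toy illustration of the general pattern, not PowerGenie's implementation: `generate`, `fitness`, and the integer "topologies" are all hypothetical stand-ins.

```python
import random

def evolutionary_finetune(generate, fitness, population,
                          generations=5, pool_size=8, top_k=4):
    """Toy sketch of co-evolving a generator with its training pool:
    propose candidates, verify uniqueness, select by fitness, and feed
    the survivors back in as the next training distribution.
    (Illustrative only; not the authors' actual method or API.)"""
    seen = set(population)                       # uniqueness verification
    for _ in range(generations):
        parents = [random.choice(population) for _ in range(pool_size)]
        candidates = [generate(p) for p in parents]
        unique = [c for c in candidates if c not in seen]
        seen.update(unique)
        # fitness selection with elitism: keep the top-k designs overall
        population = sorted(unique + population, key=fitness,
                            reverse=True)[:top_k]
    return population[0]                         # best design found

# Hypothetical usage: "designs" are integers, fitness peaks at a target.
random.seed(0)
best = evolutionary_finetune(
    generate=lambda p: p + random.randint(-2, 3),   # mutate a parent
    fitness=lambda x: -abs(x - 10),                 # stand-in FoM
    population=[0, 1, 2],
)
```

The elitism step (sorting survivors together with the current population) is one common way to guarantee the pool's best fitness never regresses across generations; the paper's actual selection scheme may differ.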