POET: Power-Oriented Evolutionary Tuning for LLM-Based RTL PPA Optimization

📅 2026-03-18
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This work addresses the challenges of ensuring functional correctness and prioritizing power optimization when leveraging large language models (LLMs) to refine RTL code for improved power, performance, and area (PPA). The authors propose POET, a novel framework that uniquely integrates differential testing with LLM-generated transformations to effectively suppress hallucinations and guarantee 100% functional correctness. Furthermore, POET introduces a power-prioritized non-dominated sorting evolutionary search mechanism that automatically converges toward low-power Pareto-optimal solutions without manual hyperparameter tuning. Evaluated on 40 designs from the RTL-OPT benchmark, POET consistently achieves the best power results across all cases while delivering competitive improvements in area and delay.
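The differential-testing idea in the summary can be sketched in miniature: the original design serves as the oracle, a deterministic stimulus generator produces golden references, and any LLM-rewritten candidate whose outputs diverge is rejected. The sketch below is illustrative only; `golden` and `candidate` are hypothetical stand-ins for RTL simulation runs (plain Python functions here, an actual simulator in the paper's pipeline).

```python
import random

def differential_test(golden, candidate, n_vectors=1000, width=8, seed=0):
    """Compare a candidate against the original design (the functional oracle).

    Deterministic seeding makes the golden references reproducible, which is
    the property the paper relies on to keep LLM hallucination out of
    verification.
    """
    rng = random.Random(seed)
    for _ in range(n_vectors):
        x = rng.getrandbits(width)
        if golden(x) != candidate(x):
            return False  # mismatch: the transformation changed behavior
    return True  # every stimulus matched the golden reference

# Hypothetical example: a correct strength-reduction vs. a hallucinated one.
original  = lambda x: x * 9
optimized = lambda x: (x << 3) + x   # x*8 + x, functionally identical
buggy     = lambda x: (x << 3) - x   # sign flipped: caught by the test

assert differential_test(original, optimized)
assert not differential_test(original, buggy)
```

Because both the stimuli and the oracle are deterministic, a failing candidate fails reproducibly, so the evolutionary loop can discard it without any LLM in the checking path.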

📝 Abstract
Applying large language models (LLMs) to RTL code optimization for improved power, performance, and area (PPA) faces two key challenges: ensuring functional correctness of optimized designs despite LLM hallucination, and systematically prioritizing power reduction within the multi-objective PPA trade-off space. We propose POET (Power-Oriented Evolutionary Tuning), a framework that addresses both challenges. For functional correctness, POET introduces a differential-testing-based testbench generation pipeline that treats the original design as a functional oracle, using deterministic simulation to produce golden references and eliminating LLM hallucination from the verification process. For PPA optimization, POET employs an LLM-driven evolutionary mechanism with non-dominated sorting, power-first intra-level ranking, and proportional survivor selection to steer the search toward the low-power region of the Pareto front without manual weight tuning. Evaluated on the RTL-OPT benchmark across 40 diverse RTL designs, POET achieves 100% functional correctness, the best power on all 40 designs, and competitive area and delay improvements.
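The selection mechanism in the abstract (non-dominated sorting plus power-first intra-level ranking) can be sketched as follows. This is a minimal illustration, not the paper's implementation: candidates are plain `(power, area, delay)` tuples, and since the exact proportional survivor-selection rule is not spelled out here, the sketch simply fills the survivor set front by front under that assumption.

```python
def dominates(a, b):
    """Pareto dominance for (power, area, delay); lower is better everywhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fronts(pop):
    """Simple O(n^2) non-dominated sorting into successive Pareto fronts."""
    fronts, remaining = [], list(pop)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q is not p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

def select_survivors(pop, k):
    """Power-first intra-level ranking: within each front, candidates are
    ordered by power (objective index 0), so ties on Pareto rank are broken
    toward the low-power end of the front."""
    survivors = []
    for front in nondominated_fronts(pop):
        for p in sorted(front, key=lambda t: t[0]):  # power first
            if len(survivors) < k:
                survivors.append(p)
    return survivors

# Hypothetical candidate pool as (power, area, delay) tuples.
pop = [(5, 3, 2), (4, 4, 4), (6, 2, 1), (4, 5, 5), (7, 7, 7)]
print(select_survivors(pop, 3))  # → [(4, 4, 4), (5, 3, 2), (6, 2, 1)]
```

Note how the first Pareto front `{(5,3,2), (4,4,4), (6,2,1)}` is re-ordered by power before survivors are taken, which is what biases the search toward the low-power region of the front without any manual objective weights.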
Problem

Research questions and friction points this paper is trying to address.

LLM hallucination
functional correctness
power optimization
PPA trade-off
RTL optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based RTL optimization
functional correctness verification
power-oriented evolutionary tuning
differential testing
Pareto-front exploration
🔎 Similar Papers
2023-12-14 · IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems · Citations: 57