PrefixGPT: Prefix Adder Optimization by a Generative Pre-trained Transformer

📅 2025-11-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prefix adder design faces challenges stemming from stringent design rules and an exponentially growing search space. This paper introduces, for the first time, a Generative Pre-trained Transformer (GPT) into hardware structure generation, proposing an end-to-end sequential topological modeling framework: circuit topology is encoded via coordinate-based representations, structural validity is enforced through legality masking, and design-rule compliance is autonomously learned via a two-stage pretraining–fine-tuning strategy. Crucially, the method generates high-performance prefix adders directly, without relying on handcrafted features or heuristic rules. Experimental results show that the best generated design achieves a 7.7% reduction in area–delay product (ADP), and that exploration quality improves substantially, lowering average ADP by up to 79.1% relative to baselines. These findings validate the feasibility and promise of large language model–driven automated hardware architecture optimization.

📝 Abstract
Prefix adders are widely used in compute-intensive applications for their high speed. However, designing optimized prefix adders is challenging due to strict design rules and an exponentially large design space. We introduce PrefixGPT, a generative pre-trained Transformer (GPT) that directly generates optimized prefix adders from scratch. Our approach represents an adder's topology as a two-dimensional coordinate sequence and applies a legality mask during generation, ensuring every design is valid by construction. PrefixGPT features a customized decoder-only Transformer architecture. The model is first pre-trained on a corpus of randomly synthesized valid prefix adders to learn design rules, and then fine-tuned to navigate the design space for optimized design quality. Compared with existing works, PrefixGPT not only finds a new optimal design with a 7.7% improved area-delay product (ADP) but also exhibits superior exploration quality, lowering the average ADP by up to 79.1%. This demonstrates the potential of GPT-style models to first master complex hardware design principles and then apply them for more efficient design optimization.
Problem

Research questions and friction points this paper is trying to address.

Optimizing prefix adder designs in large search space
Ensuring valid hardware topology through legality constraints
Improving area-delay product using generative transformer models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative pre-trained Transformer generates optimized prefix adders
Customized decoder-only architecture with legality mask ensures validity
Pre-training and fine-tuning enable superior area-delay product optimization
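The legality-mask mechanism listed above can be sketched as follows: at each decoding step, logits for coordinates that would violate the design rules are set to negative infinity before sampling, so every completed sequence is valid by construction. The coordinate encoding, the uniform stand-in logits, and the helper names are illustrative assumptions; a real model's output head would supply the logits.

```python
import math
import random

def legal_next_nodes(width, generated):
    """Coordinates (msb, lsb) that can legally be emitted next, given
    inputs (i, i) and the nodes generated so far."""
    available = {(i, i) for i in range(width)} | set(generated)
    legal = set()
    for msb in range(1, width):
        for lsb in range(msb):
            if (msb, lsb) in available:
                continue
            if any((msb, k) in available and (k - 1, lsb) in available
                   for k in range(lsb + 1, msb + 1)):
                legal.add((msb, lsb))
    return legal

def sample_masked(logits, legal):
    """Softmax-sample a coordinate after masking illegal ones to -inf."""
    masked = {c: (v if c in legal else -math.inf) for c, v in logits.items()}
    m = max(masked.values())
    weights = {c: math.exp(v - m) for c, v in masked.items()
               if v > -math.inf}
    total = sum(weights.values())
    r, acc = random.random() * total, 0.0
    for c, w in weights.items():
        acc += w
        if acc >= r:
            return c
    return c  # numerical fallback

random.seed(0)
width, generated = 4, []
# Generate until every carry prefix (i, 0) has been produced.
while not all((i, 0) in generated for i in range(1, width)):
    legal = legal_next_nodes(width, generated)
    logits = {c: 0.0 for c in legal}  # uniform stand-in for model logits
    generated.append(sample_masked(logits, legal))
print(generated)
```

Because the mask only ever exposes buildable coordinates (and the ripple extension (i, 0) is always reachable), the sampling loop terminates with a complete, rule-compliant topology regardless of the logit values.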