🤖 AI Summary
Existing adder designs lack co-optimization across the netlist and standard-cell levels, making it difficult to meet specific power, performance, and area (PPA) targets. This work proposes AXON, a framework that introduces the first automated, adder-specific netlist optimization methodology. AXON performs hierarchical design space exploration, integrating architectural-level prefix topology search with standard-cell-aware mapping. It also combines parallel-prefix and Ling structures into hybrid ultra-high-speed adders that shorten the critical path. Evaluated in TSMC 28nm technology, AXON achieves up to 10.3% lower delay, a 12.6% improvement in area-delay product, and a 32.1% reduction in energy-delay product compared to commercial synthesis tools.
📝 Abstract
Adders are fundamental building blocks in modern digital systems, and their performance, power, and area (PPA) directly impact system efficiency. Contemporary adders typically use parallel-prefix architectures with established PPA trade-offs, but these often fail to deliver globally optimal PPA for specific design goals. Prior work lacks netlist- and cell-level awareness, and general synthesis heuristics are not adder-specific, resulting in suboptimal PPA. To address this, we propose AXON, an automated netlist optimization framework for adders. It performs design space exploration from the architectural level down to the netlist level, integrating prefix topology search with standard-cell-aware mapping via a hierarchical approach to converge quickly to near-optimal PPA solutions. We also introduce a hybrid ultra-high-speed adder combining parallel-prefix and Ling architectures to shorten the critical path. Experiments with a TSMC 28nm library show that AXON improves delay, area-delay product, and energy-delay product by up to 10.3%, 12.6%, and 32.1%, respectively, compared to commercial synthesis tools.
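For readers unfamiliar with the prefix topologies AXON searches over, a parallel-prefix adder computes all carries in logarithmic depth using the associative operator (G, P) ∘ (G′, P′) = (G ∨ (P ∧ G′), P ∧ P′) over per-bit generate/propagate signals. The sketch below is a generic Kogge-Stone-style prefix addition in Python for illustration only; it is not AXON's netlist, its cell mapping, or its Ling hybrid.

```python
def kogge_stone_add(a: int, b: int, width: int = 8) -> int:
    """Illustrative Kogge-Stone parallel-prefix addition (mod 2**width).

    Carries are computed with the associative prefix operator
    (G, P) o (G', P') = (G | (P & G'), P & P') in log2(width) levels.
    """
    g = [(a >> i) & (b >> i) & 1 for i in range(width)]   # generate: a_i AND b_i
    p = [((a >> i) ^ (b >> i)) & 1 for i in range(width)]  # propagate: a_i XOR b_i
    p0 = p[:]  # keep the original propagate bits for the final sum

    # Prefix levels: each level combines spans at doubling distances,
    # giving the logarithmic carry-tree depth of parallel-prefix adders.
    dist = 1
    while dist < width:
        ng, np_ = g[:], p[:]
        for i in range(dist, width):
            ng[i] = g[i] | (p[i] & g[i - dist])
            np_[i] = p[i] & p[i - dist]
        g, p = ng, np_
        dist *= 2

    # After the tree, g[i] is the group generate of bits [0..i],
    # i.e. the carry into bit i+1 (carry-in to bit 0 is 0 here).
    carry = [0] + g[:width - 1]
    return sum((p0[i] ^ carry[i]) << i for i in range(width))
```

In hardware, each prefix level is a row of cells, and the topology (Kogge-Stone, Sklansky, Brent-Kung, etc.) trades wiring, fanout, and depth; this is the design space the prefix topology search explores.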