AutoPower: Automated Few-Shot Architecture-Level Power Modeling by Power Group Decoupling

📅 2025-08-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Architecture-level power modeling faces two challenges: analytical models are inaccurate, and data-driven models depend heavily on labeled design configurations. Method: This paper proposes an automated few-shot modeling paradigm. Its core innovation is a power-group decoupling mechanism, which decomposes total power into independently modelable subsystems (e.g., clock, SRAM), each equipped with a lightweight hierarchical submodel, significantly reducing data requirements. The method combines architecture-level structural feature analysis with few-shot machine learning and requires only two known configurations for training. Results: Experiments on mainstream processors achieve a mean absolute percentage error (MAPE) of 4.36% and an R² of 0.96, a 5% lower error and 0.09 higher R² than McPAT-Calib, demonstrating both high accuracy and strong generalization across diverse microarchitectural configurations.

📝 Abstract
Power efficiency is a critical design objective in modern CPU design. Architects need a fast yet accurate architecture-level power evaluation tool to perform early-stage power estimation. However, traditional analytical architecture-level power models are inaccurate, and recently proposed machine learning (ML)-based architecture-level power models require sufficient data from known configurations for training, which is often unrealistic in practice. In this work, we propose AutoPower, targeting fully automated architecture-level power modeling with limited known design configurations. We have two key observations: (1) the clock and SRAM dominate the power consumption of the processor, and (2) the clock and SRAM power correlate with structural information available at the architecture level. Based on these two observations, we propose power group decoupling in AutoPower. First, AutoPower decouples across power groups to build an individual power model for each group. Second, AutoPower further decouples each group's model into multiple sub-models. In our experiments, AutoPower achieves a low mean absolute percentage error (MAPE) of 4.36% and a high $R^2$ of 0.96 even with only two known configurations for training. This is 5% lower in MAPE and 0.09 higher in $R^2$ compared with McPAT-Calib, the representative ML-based power model.
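The decoupling idea described in the abstract can be sketched in a few lines: with only two known configurations, each power group's submodel reduces to a line through two points in some structural feature (e.g., flip-flop count for clock power, SRAM bit count for SRAM power), and total power is the sum of the per-group predictions. The feature names, configurations, and power numbers below are invented for illustration; this is a minimal sketch of the general idea, not the paper's actual model or data.

```python
import numpy as np

# Hypothetical structural features per power group for three configs.
# "clock" -> flip-flop count, "sram" -> SRAM bits, "logic" -> gate count.
configs = {
    "small": {"clock": 12000.0, "sram": 2.0e5, "logic": 45000.0},
    "large": {"clock": 48000.0, "sram": 9.0e5, "logic": 160000.0},
    "mid":   {"clock": 30000.0, "sram": 5.5e5, "logic": 100000.0},
}

# Per-group power (mW) for the two *known* training configurations only.
known_power = {
    "small": {"clock": 30.0, "sram": 18.0, "logic": 50.0},
    "large": {"clock": 118.0, "sram": 80.0, "logic": 175.0},
}

def fit_group_models(train_names):
    """Fit one linear submodel (slope, intercept) per power group.
    Two known configurations are enough to determine each line."""
    models = {}
    for group in ("clock", "sram", "logic"):
        x = np.array([configs[n][group] for n in train_names])
        y = np.array([known_power[n][group] for n in train_names])
        slope, intercept = np.polyfit(x, y, 1)  # degree-1 fit, exact for 2 points
        models[group] = (slope, intercept)
    return models

def predict_total_power(models, name):
    """Total power = sum of the decoupled per-group predictions."""
    return sum(s * configs[name][g] + b for g, (s, b) in models.items())

models = fit_group_models(["small", "large"])
print(round(predict_total_power(models, "mid"), 1))  # → 232.8
```

The point of the decomposition is that each submodel is tied to a structural quantity that is already known at the architecture level, so no group needs a large labeled dataset; the real system would use richer submodels per group, but the fit-per-group-then-sum structure is the same.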
Problem

Research questions and friction points this paper is trying to address.

Automated power modeling with few known configurations
Improving accuracy in architecture-level power estimation
Decoupling power groups for precise individual modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated few-shot power modeling
Power group decoupling technique
Accurate low-configuration training model