Kernel-Level Energy-Efficient Neural Architecture Search for Tabular Dataset

📅 2025-04-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitation of existing neural architecture search (NAS) methods for tabular data, which rely on proxy metrics (e.g., FLOPs) rather than actual hardware energy consumption for energy efficiency optimization. We propose the first end-to-end NAS framework that directly optimizes measured hardware-level energy consumption. Our core innovations include: (1) integrating fine-grained kernel-level energy modeling into NAS for the first time; (2) designing a tabular-data-specific search space; and (3) developing an energy-aware differentiable optimization pipeline with a tailored gradient estimation mechanism. Evaluated on multiple standard tabular benchmarks, architectures discovered by our method achieve up to 92% reduction in measured runtime energy consumption compared to conventional NAS baselines, while preserving ≥98% of the original predictive accuracy—demonstrating joint Pareto-optimal trade-offs between accuracy and energy efficiency.

Technology Category

Application Category

📝 Abstract
Many studies estimate energy consumption using proxy metrics like memory usage, FLOPs, and inference latency, with the assumption that reducing these metrics will also lower energy consumption in neural networks. This paper, however, takes a different approach by introducing an energy-efficient Neural Architecture Search (NAS) method that directly focuses on identifying architectures that minimize energy consumption while maintaining acceptable accuracy. Unlike previous methods that primarily target vision and language tasks, the approach proposed here specifically addresses tabular datasets. Remarkably, the optimal architecture suggested by this method can reduce energy consumption by up to 92% compared to architectures recommended by conventional NAS.
Problem

Research questions and friction points this paper is trying to address.

Directly minimize energy consumption in neural networks
Focus on tabular datasets, not vision or language tasks
Achieve up to 92% energy reduction compared to conventional NAS
Innovation

Methods, ideas, or system contributions that make the work stand out.

Directly minimizes energy consumption in neural networks
Focuses specifically on tabular datasets
Reduces energy by up to 92% compared to conventional NAS
🔎 Similar Papers
No similar papers found.