🤖 AI Summary
This work addresses a limitation of existing neural architecture search (NAS) methods for tabular data: when optimizing for energy efficiency, they rely on proxy metrics (e.g., FLOPs) rather than actual hardware energy consumption. We propose the first end-to-end NAS framework that directly optimizes measured hardware-level energy consumption. Our core innovations are: (1) integrating fine-grained kernel-level energy modeling into NAS; (2) designing a tabular-data-specific search space; and (3) developing an energy-aware differentiable optimization pipeline with a tailored gradient estimation mechanism. On multiple standard tabular benchmarks, architectures discovered by our method achieve up to a 92% reduction in measured runtime energy consumption compared to conventional NAS baselines while preserving at least 98% of the baseline predictive accuracy, demonstrating a Pareto-optimal trade-off between accuracy and energy efficiency.
📝 Abstract
Many studies estimate energy consumption using proxy metrics such as memory usage, FLOPs, and inference latency, on the assumption that reducing these metrics will also lower a neural network's energy consumption. This paper instead introduces an energy-efficient Neural Architecture Search (NAS) method that directly identifies architectures minimizing measured energy consumption while maintaining acceptable accuracy. Unlike previous methods, which primarily target vision and language tasks, the proposed approach specifically addresses tabular datasets. Remarkably, the optimal architecture found by this method reduces energy consumption by up to 92% compared to architectures recommended by conventional NAS.
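The energy-aware differentiable search described above can be sketched in miniature as follows. This is an illustration only, not the paper's implementation: the per-operation energies and losses are hypothetical numbers, the trade-off weight `lam` is assumed, and a plain finite-difference update stands in for the tailored gradient estimator. The key idea it shows is optimizing a softmax-relaxed objective that combines expected task loss with expected *measured* energy, rather than a FLOPs proxy:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical measured on-device energy (millijoules per inference) and
# validation loss for three candidate operations; in the real pipeline
# these would come from kernel-level hardware measurement and training.
op_energy = np.array([12.0, 4.5, 1.8])
op_loss   = np.array([0.20, 0.22, 0.31])

lam = 0.05  # assumed accuracy/energy trade-off weight

def objective(alpha):
    """Relaxed objective: expected loss + lam * expected measured energy."""
    w = softmax(alpha)
    return w @ op_loss + lam * (w @ op_energy)

# Optimize the architecture parameters alpha with a simple
# finite-difference gradient descent (a stand-in gradient estimator).
alpha = np.zeros(3)
eps, lr = 1e-4, 0.5
for _ in range(200):
    grad = np.array([
        (objective(alpha + eps * np.eye(3)[i])
         - objective(alpha - eps * np.eye(3)[i])) / (2 * eps)
        for i in range(3)
    ])
    alpha -= lr * grad

selected = int(softmax(alpha).argmax())
print(selected)  # index of the op with the best loss+energy trade-off
```

Note how the search does not simply pick the most accurate op (index 0) or rely on a size proxy: because the energy term uses measured values, the relaxation concentrates weight on the op whose combined loss-plus-energy cost is lowest.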