GreenFactory: Ensembling Zero-Cost Proxies to Estimate Performance of Neural Networks

📅 2025-05-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the computational cost of performance evaluation in neural architecture search (NAS), this paper proposes an end-to-end zero-cost accuracy prediction framework. Unlike conventional zero-cost proxies such as SynFlow, Jacobian covariance, and GraSP, which yield only relative rankings, the approach formulates accuracy prediction as a regression task: it integrates heterogeneous proxy scores from multiple sources and fuses them with a random forest regressor. Evaluated on the NATS-Bench benchmark, the framework demonstrates strong cross-dataset generalization (CIFAR-10, CIFAR-100, ImageNet-16-120) and cross-search-space robustness (SSS and TSS). Experimental results show Kendall's τ correlation coefficients of 0.907 to 0.945, substantially outperforming individual zero-cost proxies, while remaining robust to architectural variation and easy to plug into existing NAS pipelines.

📝 Abstract
Determining the performance of a Deep Neural Network during Neural Architecture Search processes is essential for identifying optimal architectures and hyperparameters. Traditionally, this process requires training and evaluation of each network, which is time-consuming and resource-intensive. Zero-cost proxies estimate performance without training, serving as an alternative to traditional training. However, recent proxies often lack generalization across diverse scenarios and provide only relative rankings rather than predicted accuracies. To address these limitations, we propose GreenFactory, an ensemble of zero-cost proxies that leverages a random forest regressor to combine multiple predictors' strengths and directly predict model test accuracy. We evaluate GreenFactory on NATS-Bench, achieving robust results across multiple datasets. Specifically, GreenFactory achieves high Kendall correlations on NATS-Bench-SSS, indicating substantial agreement between its predicted scores and actual performance: 0.907 for CIFAR-10, 0.945 for CIFAR-100, and 0.920 for ImageNet-16-120. Similarly, on NATS-Bench-TSS, we achieve correlations of 0.921 for CIFAR-10, 0.929 for CIFAR-100, and 0.908 for ImageNet-16-120, showcasing its reliability in both search spaces.
Problem

Research questions and friction points this paper is trying to address.

Estimating neural network performance without training.
Improving generalization of zero-cost proxies across scenarios.
Predicting model test accuracy directly using ensemble methods.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ensemble zero-cost proxies for performance estimation
Random forest regressor predicts test accuracy directly
High Kendall correlations across multiple datasets
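The fusion step can be sketched as follows: treat each architecture's zero-cost proxy scores as a feature vector, fit a random forest regressor against measured test accuracy, and evaluate the ranking quality of its predictions with Kendall's τ. The proxy values and the accuracy-generating function below are synthetic stand-ins (real inputs would be SynFlow, GraSP, and similar scores on NATS-Bench architectures), so the printed correlation only illustrates the pipeline, not the paper's reported numbers.

```python
# Minimal sketch of ensembling zero-cost proxies with a random forest,
# assuming synthetic proxy scores in place of real NATS-Bench measurements.
import numpy as np
from scipy.stats import kendalltau
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_archs, n_proxies = 500, 6

# Each row is one architecture; columns are scores from different zero-cost proxies.
X = rng.normal(size=(n_archs, n_proxies))
# Ground-truth accuracy as an arbitrary nonlinear mix of the proxies plus noise.
y = 0.7 + 0.05 * np.tanh(X[:, 0] + 0.5 * X[:, 1] * X[:, 2]) \
    + 0.01 * rng.normal(size=n_archs)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Kendall's tau measures rank agreement between predicted and true accuracies.
tau, _ = kendalltau(model.predict(X_te), y_te)
print(f"Kendall tau on held-out architectures: {tau:.3f}")
```

Unlike a single proxy, the regressor outputs an absolute accuracy estimate, and the forest can exploit nonlinear interactions between proxies that no individual ranking captures.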
Gabriel Cortes
University of Coimbra, CISUC/LASI, DEI
Nuno Lourenço
University of Coimbra, CISUC/LASI, DEI
Paolo Romano
INESC-ID & Instituto Superior Técnico, Universidade de Lisboa
Penousal Machado
University of Coimbra
Evolutionary Computation · Artificial Intelligence · Computational Creativity