From Black-Box Tuning to Guided Optimization via Hyperparameters Interaction Analysis

📅 2025-12-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Hyperparameter tuning suffers from high computational cost and lacks interpretable guidance on parameter importance rankings, pairwise interactions, and critical value ranges. To address this, we propose MetaSHAP, the first framework to integrate SHAP value analysis with meta-learning. Leveraging over 9 million historical machine learning pipelines, MetaSHAP models the directional impact, pairwise interactions, and sensitivity intervals of hyperparameters, generating dataset- and algorithm-specific, interpretable tuning recommendations. The method combines surrogate model construction, Bayesian optimization guidance, and large-scale benchmarking across 164 classification datasets and 14 classifiers. Experiments demonstrate that MetaSHAP yields reliable hyperparameter importance estimates and guides Bayesian optimization to state-of-the-art performance, addressing key limitations of conventional black-box tuning approaches.

📝 Abstract
Hyperparameter tuning is a fundamental, yet computationally expensive, step in optimizing machine learning models. Beyond optimization, understanding the relative importance and interactions of hyperparameters is critical to efficient model development. In this paper, we introduce MetaSHAP, a scalable, semi-automated eXplainable AI (XAI) method that uses meta-learning and Shapley-value analysis to provide actionable, dataset-aware tuning insights. MetaSHAP operates over a vast benchmark of over 9 million evaluated machine learning pipelines, allowing it to produce interpretable importance scores and actionable tuning insights that reveal how much each hyperparameter matters, how it interacts with others, and in which value ranges its influence is concentrated. For a given algorithm and dataset, MetaSHAP learns a surrogate performance model from historical configurations, computes hyperparameter interactions using SHAP-based analysis, and derives interpretable tuning ranges for the most influential hyperparameters. This allows practitioners not only to prioritize which hyperparameters to tune, but also to understand their directionality and interactions. We empirically validate MetaSHAP on a diverse benchmark of 164 classification datasets and 14 classifiers, demonstrating that it produces reliable importance rankings and competitive performance when used to guide Bayesian optimization.
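The paper's own pipeline is not reproduced here, but the surrogate-plus-Shapley idea can be sketched in a few lines. The sketch below is illustrative only: the synthetic "historical configurations," the random-forest surrogate, and the mean-imputation baseline are all assumptions, and the exact Shapley computation stands in for the SHAP tooling the paper presumably uses. It fits a surrogate on (configuration, score) pairs, then ranks hyperparameters by mean absolute Shapley value.

```python
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic "historical configurations": three hyperparameters and an
# accuracy-like score where h0 dominates and h0/h1 interact. These names
# and effect sizes are invented for illustration.
X = rng.uniform(0.0, 1.0, size=(500, 3))
y = 0.6 * X[:, 0] + 0.3 * X[:, 0] * X[:, 1] + 0.05 * X[:, 2] \
    + rng.normal(0.0, 0.01, size=500)

# Surrogate performance model fit on (configuration -> score) pairs.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def shapley_values(model, X, j):
    """Exact per-sample Shapley values of feature j, with 'absent'
    features replaced by their column means (a simple baseline)."""
    n = X.shape[1]
    baseline = X.mean(axis=0)
    others = [k for k in range(n) if k != j]
    phi = np.zeros(len(X))
    for size in range(n):
        for S in combinations(others, size):
            weight = factorial(size) * factorial(n - 1 - size) / factorial(n)
            with_j = np.tile(baseline, (len(X), 1))
            cols = list(S) + [j]
            with_j[:, cols] = X[:, cols]
            without_j = np.tile(baseline, (len(X), 1))
            if S:
                without_j[:, list(S)] = X[:, list(S)]
            phi += weight * (model.predict(with_j) - model.predict(without_j))
    return phi

names = ["h0", "h1", "h2"]
# Importance = mean absolute Shapley value across historical configurations.
scores = {name: np.abs(shapley_values(surrogate, X, j)).mean()
          for j, name in enumerate(names)}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```

On this toy surrogate, h0 should come out on top, mirroring the kind of importance ranking the abstract describes; the dominant hyperparameter's value range could then be inspected for the intervals where its Shapley contribution concentrates.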
Problem

Research questions and friction points this paper is trying to address.

Identifies hyperparameter importance and interactions for tuning
Provides interpretable tuning insights using meta-learning and Shapley values
Guides efficient hyperparameter optimization to reduce computational costs
Innovation

Methods, ideas, or system contributions that make the work stand out.

MetaSHAP uses meta-learning and Shapley values for hyperparameter analysis
It learns from millions of evaluated pipelines to provide interpretable tuning insights
The method guides Bayesian optimization with dataset-aware importance rankings
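How an importance ranking can guide optimization is easy to illustrate. The toy sketch below is not the paper's method: the objective, the budget, and the "sensitivity interval" for h0 are all invented, and plain random search stands in for Bayesian optimization. It contrasts black-box search over the full space with a guided search that pins a near-irrelevant hyperparameter to its default and restricts the dominant one to a narrowed range.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical validation score over three hyperparameters (higher is
# better); h0 dominates and h2 is nearly irrelevant -- the situation an
# importance analysis is meant to expose.
def validation_score(h0, h1, h2):
    return -((h0 - 0.7) ** 2) - 0.1 * (h1 - 0.3) ** 2 - 0.01 * (h2 - 0.5) ** 2

budget = 30

# Black-box baseline: random search over the full 3-D space.
best_blackbox = max(
    validation_score(*rng.uniform(0.0, 1.0, size=3)) for _ in range(budget)
)

# Guided search: same budget, but h2 is pinned to its default (0.5) and
# h0 is restricted to an assumed sensitivity interval [0.5, 0.9].
best_guided = max(
    validation_score(rng.uniform(0.5, 0.9), rng.uniform(0.0, 1.0), 0.5)
    for _ in range(budget)
)
print(best_blackbox, best_guided)
```

By concentrating the same evaluation budget on the hyperparameters and ranges that matter, the guided search typically lands much closer to the optimum; the paper's contribution is deriving such priorities and ranges automatically rather than assuming them.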
Moncef Garouani
Université Toulouse Capitole - IRIT
AutoML · Explainable AI · Meta-Learning · Machine Learning · Deep Learning
Ayah Barhrhouj
LIS, UMR 7020 CNRS, Aix-Marseille University, Marseille, France