Best Practices for Empirical Meta-Algorithmic Research: Guidelines from the COSEAL Research Network

📅 2025-12-18
🤖 AI Summary
Empirical studies of meta-algorithms (algorithm selection, configuration, and scheduling) often suffer from poor reproducibility and a high risk of bias, owing to the many degrees of freedom in experimental design and to fragmented community practices. Method: This report systematically consolidates long-standing best practices from across the subfields of the COSEAL community into a unified, evolving methodological framework spanning the entire experimental lifecycle: problem formulation → experimental design → execution → analysis → result presentation. Grounded in empirical methodology, rigorous experimental design, statistical standards, and principles of scientific communication, it emphasizes controlled variable management, benchmark standardization, and result transparency. Contribution/Results: The framework reduces experimental bias, improves cross-study comparability, and strengthens scientific rigor. It has been adopted for onboarding new researchers and for informing journal review criteria.

📝 Abstract
Empirical research on meta-algorithmics, such as algorithm selection, configuration, and scheduling, often relies on extensive and thus computationally expensive experiments. With the large degree of freedom we have over our experimental setup and design comes a plethora of possible error sources that threaten the scalability and validity of our scientific insights. Best practices for meta-algorithmic research exist, but they are scattered between different publications and fields, and continue to evolve separately from each other. In this report, we collect good practices for empirical meta-algorithmic research across the subfields of the COSEAL community, encompassing the entire experimental cycle: from formulating research questions and selecting an experimental design, to executing experiments, and ultimately, analyzing and presenting results impartially. It establishes the current state-of-the-art practices within meta-algorithmic research and serves as a guideline to both new researchers and practitioners in meta-algorithmic fields.
Problem

Research questions and friction points that this paper addresses.

Addresses scalability and validity issues in meta-algorithmic experiments
Consolidates scattered best practices across subfields into unified guidelines
Provides comprehensive guidance for the entire experimental research cycle
Innovation

Methods, ideas, or system contributions that make the work stand out.

Collects best practices across meta-algorithmic subfields
Encompasses entire experimental cycle from design to analysis
Establishes guidelines for scalable and valid research
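The summary above highlights controlled variable management and result transparency across the experimental cycle. As a purely illustrative sketch (not taken from the paper; the toy `run_trial` function and its configuration are hypothetical stand-ins for a real solver benchmark), the harness below shows two of the practices in code form: pinning a per-trial random seed so every run is exactly repeatable, and logging the full experiment configuration alongside aggregate statistics rather than a single cherry-picked run.

```python
import json
import random
import statistics

def run_trial(algorithm, instance_size, seed):
    """Toy stand-in for one experimental run (hypothetical: a real
    study would invoke an actual solver on a benchmark instance)."""
    rng = random.Random(seed)  # per-trial RNG; no hidden global state
    base = 1.0 if algorithm == "baseline" else 0.9
    return base * instance_size + rng.gauss(0, instance_size * 0.05)

def run_experiment(algorithms, instance_sizes, seeds):
    """Run every (algorithm, instance, seed) combination and record the
    full configuration with each result, so the experiment can be
    re-executed and audited later."""
    records = []
    for algo in algorithms:
        for size in instance_sizes:
            for seed in seeds:
                records.append({
                    "algorithm": algo,
                    "instance_size": size,
                    "seed": seed,
                    "runtime": run_trial(algo, size, seed),
                })
    return records

if __name__ == "__main__":
    config = {"algorithms": ["baseline", "candidate"],
              "instance_sizes": [100, 200],
              "seeds": [0, 1, 2, 3, 4]}
    results = run_experiment(**config)
    # Report mean and spread per algorithm, never a single run.
    for algo in config["algorithms"]:
        runtimes = [r["runtime"] for r in results if r["algorithm"] == algo]
        print(algo, round(statistics.mean(runtimes), 2),
              round(statistics.stdev(runtimes), 2))
    # Persist the configuration with the results for reproducibility.
    print(json.dumps(config))
```

Because each trial derives its randomness from an explicit seed, rerunning the experiment with the same configuration reproduces the results bit-for-bit, which is the property the guidelines' emphasis on transparency and comparability relies on.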
Theresa Eimer
RL Team Lead, Leibniz Universität Hannover
Reinforcement Learning, AutoRL, Generalization
Lennart Schäpermeier
University of Münster
André Biedenkapp
RL Subgroup Lead, University of Freiburg
Dynamic Algorithm Configuration, Learning to Learn, Reinforcement Learning, AutoML, AutoRL
Alexander Tornede
Leibniz University Hannover
Lars Kotthoff
University of St Andrews
Combinatorial optimization, applied machine learning, algorithm selection, automated machine
Pieter Leyman
Ghent University
Combinatorial Optimization, (Meta)heuristics, Explainability, Sustainability
Matthias Feurer
TU Dortmund University, Lamarr Institute
Hyperparameter optimization, Bayesian optimization, OpenML, AutoML, Benchmarking
Katharina Eggensperger
Professor for ML and AI | Lamarr Institute, TU Dortmund University
AutoML, Hyperparameter Optimization, Bayesian Optimization, Meta-Learning, Tabular Data
Kaitlin Maile
Google
Tanja Tornede
Leibniz University Hannover
Anna Kozak
Warsaw University of Technology
Ke Xue
Nanjing University
Black-Box Optimization, Machine Learning
Marcel Wever
LUHAI, Leibniz University Hannover
AutoML, Multi-Label Classification, Hyperparameter Optimization, Algorithm Selection, AI
Mitra Baratchi
Associate professor, Leiden University
Mobility data mining, Spatio-temporal data mining, Time-series data mining, Ubiquitous computing
Damir Pulatov
University of North Carolina Wilmington
Heike Trautmann
Paderborn University
Haniye Kashgarani
Purdue University
Marius Lindauer
Leibniz University Hannover (Germany), Institute of Artificial Intelligence LUH|AI, L3S Research
Machine Learning, AutoML, Reinforcement Learning, Interpretable Machine Learning, Artificial Intelligence