🤖 AI Summary
In methodological comparison studies, method failures such as non-convergence or absence of output prevent a method's performance from being measured on the affected data sets, yet the literature offers little standardized guidance on handling such failures, and the chosen approach is often unreported or misapplied. Method: We analyze the causes of method failure and the risks of improper handling, show that the popular strategies of discarding affected data sets and imputing missing performance values are statistically biased in most cases, and propose the principle of "context-adapted failure fallback," grounding failure handling in fallback mechanisms that are realistic in practice. Drawing on published comparison studies from contexts ranging from classical statistics to predictive modeling, we identify widespread deficiencies in how failures are handled and reported. Contribution/Results: Two illustrative case studies demonstrate that inappropriate failure handling can substantially distort method rankings and undermine the validity of conclusions. Our work thereby bridges critical theoretical and practical gaps in the principled treatment of method failure in empirical methodology research.
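To make the bias concrete, here is a minimal sketch (not from the paper) of a toy comparison in which one method fails on the hardest simulated data sets; all names and numbers are hypothetical and illustrative only. Because failures are not random but concentrated on difficult data sets, discarding the failing replications makes the method look better than it is, whereas evaluating a defined fallback keeps every data set in the aggregate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_datasets = 10_000

# Hypothetical setup: each simulated data set has a latent "difficulty";
# the evaluated method's error grows with difficulty, and it fails
# (produces no result) on the hardest data sets.
difficulty = rng.uniform(0, 1, n_datasets)
error = difficulty + rng.normal(0, 0.05, n_datasets)  # per-data-set error
fails = difficulty > 0.8                               # failures are NOT random

# Strategy 1: discard data sets where the method failed (common but biased).
mean_error_discard = error[~fails].mean()

# Strategy 2: apply a (hypothetical) fallback whose error is worse but defined
# everywhere, so all data sets contribute to the aggregate performance.
fallback_error = np.where(fails, 0.9, error)
mean_error_fallback = fallback_error.mean()

print(f"discarding failures: {mean_error_discard:.3f}")   # optimistic estimate
print(f"with fallback:       {mean_error_fallback:.3f}")  # reflects all data sets
```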
📝 Abstract
Comparison studies in methodological research are intended to compare methods in an evidence-based manner and to guide data analysts in selecting a suitable method for their application. To provide trustworthy evidence, they must be carefully designed, implemented, and reported, especially given the many decisions involved in planning and running them. A common challenge in comparison studies is handling the "failure" of one or more methods to produce a result for some (real or simulated) data sets, so that their performances cannot be measured in those instances. Despite an increasing emphasis on this topic in recent literature (focusing on non-convergence as a common manifestation), there is little guidance on proper handling and interpretation, and reporting of the chosen approach is often neglected. This paper aims to fill this gap by providing practical guidance for handling method failure in comparison studies. In particular, we show that the popular approaches of discarding data sets that yield a failure (either for all methods or for the failing methods only) and of imputing missing performance values are inappropriate in most cases. We also discuss how method failure in published comparison studies, in contexts ranging from classical statistics to predictive modeling, may manifest differently but is often caused by a complex interplay of several aspects. Building on this, we provide recommendations, derived from realistic considerations, on suitable fallbacks to use when a method fails, thereby avoiding the need to discard data sets or impute values. Finally, we illustrate our recommendations and the dangers of inadequate handling of method failure through two example comparison studies.
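The following sketch illustrates, under assumed placeholders, what a fallback-based handling strategy could look like inside a comparison-study loop: when the evaluated method fails, a pre-specified fallback that a data analyst would realistically use is evaluated instead, and the fallback rate is recorded for transparent reporting. The functions and the failure criterion (`MethodFailure`, `fit_flexible`, `fit_baseline`) are hypothetical stand-ins, not the paper's method or any specific library's API.

```python
import numpy as np

class MethodFailure(Exception):
    """Hypothetical signal that a method produced no usable result."""

def fit_flexible(X, y):
    # Toy stand-in for a method that can fail: ordinary least squares,
    # declared "failed" when the design matrix is severely ill-conditioned.
    if np.linalg.cond(X.T @ X) > 1e10:
        raise MethodFailure("near-singular design, no reliable estimate")
    return np.linalg.solve(X.T @ X, X.T @ y)

def fit_baseline(X, y):
    # Pre-specified fallback a practitioner could realistically use:
    # an intercept-only model (assumes the first column of X is the intercept).
    coef = np.zeros(X.shape[1])
    coef[0] = y.mean()
    return coef

def mse(coef, X, y):
    return float(np.mean((X @ coef - y) ** 2))

def run_comparison(datasets):
    """Evaluate on every data set; on failure, switch to the fallback and record it."""
    records = []
    for X, y in datasets:
        try:
            coef, used_fallback = fit_flexible(X, y), False
        except MethodFailure:
            # Neither discard the data set nor impute a score:
            # evaluate the fallback so every data set contributes.
            coef, used_fallback = fit_baseline(X, y), True
        records.append({"mse": mse(coef, X, y), "used_fallback": used_fallback})
    fallback_rate = np.mean([r["used_fallback"] for r in records])
    return records, fallback_rate
```

Recording and reporting `fallback_rate` alongside the performance results keeps the chosen handling approach visible to readers, in line with the paper's emphasis on reporting how failures were dealt with.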