Global p-Values in Multi-Design Studies

📅 2025-07-04
🤖 AI Summary
In multi-design studies, results from diverse analytical strategies are difficult to integrate, leading to inaccurate global significance assessments and heightened risks of selective reporting. To address this, we propose a global *p*-value framework based on the *g*-value, which constructs a unified statistical model to aggregate outputs from multiple analytical strategies into a single, multiplicity-adjusted global test statistic. The method rigorously controls the family-wise error rate while maintaining high statistical power, thereby mitigating selective reporting bias. We establish its asymptotic validity theoretically and validate its performance through extensive simulations and empirical analyses. Results demonstrate substantial improvements in true positive detection, reduced false positive risk, and enhanced result reproducibility. The framework is broadly applicable across disciplines and provides a generalizable inferential paradigm for integrative analysis in multi-design research.

📝 Abstract
Replicability issues, referring to the difficulty or failure of independent researchers to corroborate the results of published studies, have hindered the meaningful progression of science and eroded public trust in scientific findings. In response to the replicability crisis, one approach is the use of multi-design studies, which apply multiple analysis strategies to a single research question. However, there remains a lack of methods for effectively combining outcomes in multi-design studies. In this paper, we propose a unified framework based on the g-value (for "global p-value"), which enables meaningful aggregation of outcomes from all of the considered analysis strategies in a multi-design study. Our framework mitigates the risk of selective reporting while rigorously controlling type I error rates. At the same time, it maintains statistical power and reduces the likelihood of overlooking true positive effects. Importantly, our method is flexible and broadly applicable across a wide range of scientific domains and outcome types.
Problem

Research questions and friction points this paper is trying to address.

Lack of methods for combining multi-design study outcomes
Need to mitigate selective reporting and control type I error
Requirement for flexible, powerful statistical framework across domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified framework using g-value aggregation
Controls type I error and maintains power
Flexible across scientific domains
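The page does not reproduce the g-value's actual construction, but the core idea of collapsing several strategy-level p-values into a single multiplicity-adjusted global test can be sketched with a Bonferroni-adjusted minimum. This is an illustrative stand-in, not the authors' method; the function name `global_p_value` and the example p-values are hypothetical:

```python
import numpy as np

def global_p_value(p_values):
    """Aggregate per-strategy p-values into one global p-value using the
    Bonferroni-adjusted minimum: k * min(p), capped at 1. This controls
    the family-wise error rate across the k analysis strategies, though
    it is only a simple proxy for the paper's g-value."""
    p = np.asarray(p_values, dtype=float)
    k = len(p)
    return min(1.0, k * p.min())

# Example: three analysis strategies address the same research question.
# Reporting only the smallest p-value (0.012) would be selective reporting;
# the adjusted global value accounts for having tried all three strategies.
print(round(global_p_value([0.012, 0.200, 0.470]), 6))  # 0.036
```

A min-p rule rejects only when the best strategy survives multiplicity adjustment; more powerful aggregations (e.g., weighting strategies by dependence structure) are possible, which is where a dedicated framework like the g-value adds value over naive corrections.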
Guillaume Coqueret
Department of Quantitative Finance and Economics, EMLYON Business School, France
Yuming Zhang
University of Kentucky
Weld, Sensor, Modeling, Control, Robot
Christophe Pérignon
Department of Finance, HEC Paris, France
Francesca Chiaromonte
Professor of Statistics, Pennsylvania State University, Sant'Anna School of Advanced Studies
Statistics, Genomics, Bioinformatics, Meteorology, Economics
Stéphane Guerrier
Faculty of Science, University of Geneva, Switzerland