🤖 AI Summary
Orthogonal arrays (OAs), a classical tool in experimental design, have long lacked a systematic synthesis of their theoretical foundations and cross-disciplinary applications. This paper provides the first comprehensive survey of OA fundamentals, statistical properties, and recent advances. Methodologically, it unifies combinatorial mathematics, finite field theory, and coding theory to uncover deep connections between OAs and both geometric structures and error-correcting codes. The work identifies and integrates cutting-edge applications in AI (including black-box optimization and hyperparameter tuning), high-dimensional numerical integration, machine learning sampling, and interpretable visualization. Crucially, it establishes an interdisciplinary application framework that repositions OAs beyond traditional statistical design toward an AI-driven, standardized sampling paradigm, markedly enhancing both computational efficiency and interpretability in high-dimensional problem solving.
📝 Abstract
Orthogonal arrays are arguably among the most fascinating and important statistical tools for efficient data collection. They have a simple, natural definition, desirable properties when used as fractional factorials, and a rich and beautiful mathematical theory. Their connections with combinatorics, finite fields, geometry, and error-correcting codes are profound. Orthogonal arrays have been widely used in agriculture, engineering, manufacturing, and high-technology industries for quality and productivity improvement experiments. In recent years, they have drawn rapidly growing interest from fields such as computer experiments, numerical integration, visualization, optimization, big data, and machine learning/artificial intelligence, driven by successful applications in those areas. We review the fundamental concepts and statistical properties of orthogonal arrays and report recent developments, with discussions of recent applications and connections to various fields.
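The "simple, natural definition" mentioned in the abstract can be made concrete: an orthogonal array OA(N, k, s, t) is an N x k array over s symbols such that, in every N x t subarray, each of the s^t possible level combinations appears equally often. Below is a minimal sketch (our illustration, not code from the paper) that checks this property for a small strength-2 array:

```python
from itertools import combinations, product

def is_orthogonal_array(rows, s, t):
    """Check the defining property of an OA(N, k, s, t): in every
    choice of t columns, each of the s^t level combinations occurs
    exactly lambda = N / s^t times."""
    n = len(rows)
    lam, rem = divmod(n, s ** t)
    if rem:  # N must be a multiple of s^t
        return False
    k = len(rows[0])
    for cols in combinations(range(k), t):
        counts = {combo: 0 for combo in product(range(s), repeat=t)}
        for row in rows:
            counts[tuple(row[c] for c in cols)] += 1
        if any(c != lam for c in counts.values()):
            return False
    return True

# OA(4, 3, 2, 2): 4 runs, 3 two-level factors, strength 2 (lambda = 1).
# Every pair of columns contains each of (0,0), (0,1), (1,0), (1,1) once.
oa = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(is_orthogonal_array(oa, s=2, t=2))  # True
```

Used as a fractional factorial, this 4-run array estimates main effects of 3 two-level factors with only half the 8 runs a full factorial would need, which is the efficiency property the abstract refers to.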