🤖 AI Summary
This study addresses the lack of systematic tools for evaluating performance differences among database management systems under multi-level tuning configurations. The authors present a scalable experimental framework that, for the first time, enables cross-system performance comparison of four mainstream open-source databases in a unified environment and under diverse query and update workloads. Automated scripts handle deployment, data generation, and benchmarking, and multidimensional combinations of workloads and parameters are analyzed to quantify how the effectiveness of tuning strategies varies across systems. The results not only recommend configurations for specific workloads but also offer practical guidance for database selection and tuning, establishing an extensible evaluation benchmark for future research.
📝 Abstract
DBTuneSuite is a suite of experiments on four widely deployed free database systems that tests their performance under various query/upsert loads and under various tuning options. The suite provides: (i) scripts to generate data and to install and run the tests, making the suite extensible to other tests and systems; (ii) recommendations on which systems work best for which query types; and (iii) quantitative evidence that tuning options widely used in practice can behave very differently across systems. This paper is most useful for database system engineers, advanced database users and troubleshooters, and students.
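The kind of cross-configuration measurement the suite automates can be sketched in miniature with Python's standard-library `sqlite3` module. This is a simplified illustration only: the table, workload, and `PRAGMA synchronous` setting below are hypothetical stand-ins, not part of DBTuneSuite or its target systems.

```python
import sqlite3
import time

def bench(pragma_value: str, n_rows: int = 5000) -> float:
    """Time one insert-heavy workload under a given tuning setting
    (illustrative only; real suites vary many more parameters)."""
    conn = sqlite3.connect(":memory:")
    # A single tuning knob; DBTuneSuite-style tools sweep many such knobs.
    conn.execute(f"PRAGMA synchronous = {pragma_value}")
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v REAL)")
    start = time.perf_counter()
    conn.executemany("INSERT INTO t (v) VALUES (?)",
                     ((float(i),) for i in range(n_rows)))
    conn.execute("SELECT avg(v) FROM t").fetchone()
    conn.commit()
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed

if __name__ == "__main__":
    # Run the same workload under two settings of the same knob,
    # mirroring the suite's cross-configuration comparisons.
    for setting in ("OFF", "FULL"):
        print(f"synchronous={setting}: {bench(setting):.4f}s")
```

Scaling this idea up means scripting the same loop over multiple database systems, workload mixes, and parameter combinations, which is precisely the automation the suite contributes.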