🤖 AI Summary
Existing distributed coordination services lack standardized testing methodologies and tools, resulting in incomplete evaluations and non-comparable results. Method: We conduct a systematic survey of evaluation practices across mainstream coordination services, identify critical gaps in the benchmarking of consistency, fault tolerance, and scalability, and distill six core evaluation requirements. Through literature analysis and cross-tool comparison, we identify 12 key evaluation parameters and categorize five typical defects. Contribution/Results: We propose a standardized evaluation framework incorporating consistency models, fault injection, and distributed topology configurations. Furthermore, we introduce a reproducible, comparable, and scenario-driven benchmarking guideline designed specifically for coordination services, filling a critical gap in the evaluation ecosystem dedicated to this domain.
📝 Abstract
Coordination services and protocols are critical components of distributed systems and are essential for providing consistency, fault tolerance, and scalability. However, because no standard benchmarking tool exists for distributed coordination services, developers and researchers of these services either reuse a standard NoSQL benchmark and omit evaluation of consistency, distribution, and fault tolerance, or build their own ad-hoc microbenchmarks and sacrifice comparability with other services. In this study, we analyze and compare well-known, widely used distributed coordination services, their evaluations, and the tools used to benchmark those systems. We identify important requirements for benchmarking distributed coordination services, such as the metrics and parameters that must be evaluated and the setups and tools used to evaluate them.