Tuning Block Size for Workload Optimization in Consortium Blockchain Networks

📅 2025-08-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In consortium blockchains, the impact of block size on throughput, latency, and processing efficiency remains poorly understood, and block-size configuration lacks theoretical grounding. Method: This paper proposes a co-optimization method integrating mathematical modeling and intelligent optimization. We first derive an analytical model of block processing time—incorporating transaction volume, network bandwidth, and consensus overhead—and then design a multi-objective genetic algorithm framework to automatically compute Pareto-optimal block sizes prior to deployment. Contribution/Results: Implemented and evaluated on Hyperledger Fabric, our approach achieves a 37.2% throughput improvement and a 29.5% reduction in end-to-end latency compared to default configurations. To the best of our knowledge, this is the first work to synergistically combine interpretable analytical modeling with data-driven optimization, establishing a reusable, transferable paradigm for block-parameter configuration across diverse business scenarios.

Technology Category

Application Category

📝 Abstract
Determining the optimal block size is crucial for achieving high throughput in blockchain systems. Many studies have focused on tuning various components, such as databases, network bandwidth, and consensus mechanisms. However, the impact of block size on system performance remains a topic of debate, often resulting in divergent views and even leading to new forks in blockchain networks. This research proposes a mathematical model to maximize performance by determining the ideal block size for Hyperledger Fabric, a prominent consortium blockchain. By leveraging machine learning and solving the model with a genetic algorithm, the proposed approach assesses how factors such as block size, transaction size, and network capacity influence the block processing time. The integration of an optimization solver enables precise adjustments to block size configuration before deployment, ensuring improved performance from the outset. This systematic approach aims to balance block processing efficiency, network latency, and system throughput, offering a robust solution to improve blockchain performance across diverse business contexts.
Problem

Research questions and friction points this paper is trying to address.

Optimizing block size to maximize blockchain throughput
Modeling block processing time with transaction and network factors
Balancing efficiency, latency, and throughput in consortium blockchains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mathematical model for optimal block size
Machine learning with genetic algorithm
Optimization solver for pre-deployment configuration
🔎 Similar Papers
No similar papers found.
N
Narges Dadkhah
Department of Mathematics and Computer Science, Freie Universität Berlin, Germany
S
Somayeh Mohammadi
Department of Mathematics and Computer Science, Freie Universität Berlin, Germany
Gerhard Wunder
Gerhard Wunder
Professor Cybersecurity and AI, FU Berlin
AICybersecurityMachine LearningInformation Theory