🤖 AI Summary
To address the high latency and deployment costs caused by suboptimal node placement in edge–fog collaborative computing, this paper proposes a genetic algorithm framework that jointly optimizes latency and cost. It introduces a tunable fitness function and a hybrid encoding scheme to co-optimize the spatial placement and resource allocation of both edge and fog nodes. The framework jointly models and solves for communication latency and infrastructure deployment cost within large-scale IoT simulation scenarios. Experimental results demonstrate that, compared to baseline approaches, the proposed method reduces end-to-end latency by 2.77% on average and cuts system deployment costs by 31.15%. It significantly enhances deployment efficiency and flexibility, offering a scalable and configurable intelligent deployment paradigm for heterogeneous edge–fog architectures.
📝 Abstract
Reducing latency in the Internet of Things (IoT) is a critical concern. While cloud computing facilitates communication, it falls short of meeting real-time requirements reliably. Edge and fog computing have emerged as viable solutions by positioning computing nodes closer to end users, offering lower latency and increased processing power. An edge-fog framework comprises various components, including edge and fog nodes, whose strategic placement is crucial as it directly impacts latency and system cost. This paper presents an effective and tunable node placement strategy based on a genetic algorithm to address the optimization problem of deploying edge and fog nodes. The main objective is to minimize latency and cost through optimal node placement. Simulation results demonstrate that the proposed framework achieves latency reductions of up to 2.77% and cost reductions of up to 31.15%.
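The tunable latency–cost trade-off described above can be sketched as a weighted fitness function inside a simple genetic algorithm loop. This is a minimal illustrative sketch, not the paper's actual implementation: the candidate sites, per-site costs and latencies, the single weight `alpha`, and the bit-string encoding are all assumptions standing in for the paper's hybrid encoding and fitness design.

```python
import random

# Hypothetical sketch of GA-based node placement; all data is synthetic.
random.seed(0)

NUM_SITES = 20  # candidate deployment locations (assumed)
COST = [random.uniform(1, 10) for _ in range(NUM_SITES)]     # per-site cost
LATENCY = [random.uniform(1, 50) for _ in range(NUM_SITES)]  # per-site latency

def fitness(placement, alpha=0.5):
    """Lower is better: convex combination of latency and cost.

    `placement` is a bit list; 1 deploys a node at that site. `alpha`
    tunes the latency/cost trade-off (modeling the paper's tunable
    fitness function as a simple weighted sum).
    """
    deployed = [i for i, bit in enumerate(placement) if bit]
    if not deployed:
        return float("inf")  # infeasible: no nodes deployed
    best_latency = min(LATENCY[i] for i in deployed)  # users hit nearest node
    total_cost = sum(COST[i] for i in deployed)
    return alpha * best_latency + (1 - alpha) * total_cost

def evolve(pop_size=30, generations=100, alpha=0.5, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(NUM_SITES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, alpha))
        survivors = pop[: pop_size // 2]              # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, NUM_SITES)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate)
                     for bit in child]                # bit-flip mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda p: fitness(p, alpha))

best = evolve()
print("best fitness:", round(fitness(best), 2))
```

Raising `alpha` toward 1 biases the search toward low-latency (denser) placements, while lowering it favors cheaper, sparser deployments, which is the kind of configurability the framework advertises.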