🤖 AI Summary
Deploying convolutional neural networks (CNNs) on programmable data planes (PDPs) is hindered by severe hardware constraints—limited on-chip memory and insufficient computational resources—particularly on Intel Tofino switches.
Method: This work presents the first end-to-end CNN inference implementation on Intel Tofino, introducing a hardware-aware joint optimization framework: structured pruning to reduce model size; INT8 quantization to minimize memory footprint and arithmetic cost; and pipelined, block-wise scheduling tailored to Tofino’s parallel architecture, all realized via P4 programming and hardware-software co-mapping.
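The INT8 quantization step mentioned above can be illustrated with a minimal sketch. This is not quark's actual implementation; it shows the standard symmetric per-tensor scheme (scale derived from the weight with largest magnitude, values rounded and clipped to the signed 8-bit range), which is the usual way to shrink memory footprint and avoid floating-point arithmetic on the data plane. Function names here are illustrative.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor INT8 quantization sketch (not quark's exact scheme).

    Maps float weights to int8 codes q with w ~= q * scale,
    so inference can run on integer-only hardware.
    """
    scale = np.max(np.abs(w)) / 127.0          # largest value maps to +/-127
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

# Toy weight vector: quantize, then dequantize to check the error.
w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
recovered = q.astype(np.float32) * s            # dequantized approximation
```

The reconstruction error of each weight is bounded by half the scale, which is what makes 8-bit storage viable for a pruned model.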
Contribution/Results: Evaluated on network anomaly detection, the system achieves 97.3% accuracy while consuming only 22.7% of on-chip SRAM, with an average latency of 42.66 μs and full line-rate processing at 100 Gbps. This constitutes the first high-accuracy, low-overhead, and practically deployable real-time CNN inference solution for intelligent data planes (IDPs).
📝 Abstract
The rapid development of programmable network devices and the widespread use of machine learning (ML) in networking have spurred research into the intelligent data plane (IDP). Offloading ML to the programmable data plane (PDP) enables quick analysis of and response to network traffic dynamics, as well as efficient management of network links. However, the PDP hardware pipeline has significant resource limitations. For instance, the Intel Tofino ASIC has only 10 Mb of SRAM in each stage and lacks support for multiplication, division, and floating-point operations. These constraints significantly hinder the development of IDPs. This paper presents quark, a framework that fully offloads convolutional neural network (CNN) inference onto the PDP. quark employs model pruning to simplify the CNN model, and uses quantization to approximate floating-point operations with integer arithmetic. Additionally, quark divides the CNN into smaller units to improve resource utilization on the PDP. We have implemented a testbed prototype of quark on both a P4 hardware switch (Intel Tofino ASIC) and a software switch (i.e., BMv2). Extensive evaluation results demonstrate that quark achieves 97.3% accuracy on an anomaly detection task while using only 22.7% of the SRAM resources on the Intel Tofino ASIC switch, completing inference at line rate with an average latency of 42.66 μs.
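Because the Tofino pipeline has no multiply instruction, integer products after quantization must be emulated with operations the hardware does support. A common pattern, sketched below under that assumption (the paper's exact mechanism is not detailed here), is to decompose multiplication by a constant weight into shifts and adds, which map naturally onto match-action stages. The function name is hypothetical.

```python
def const_multiply_shift_add(x, weight):
    """Multiply x by a constant weight using only shifts and adds.

    Emulates integer multiplication on hardware without a multiplier:
    each set bit of the weight contributes one shifted copy of x.
    """
    acc = 0
    bit = 0
    w = weight
    while w:
        if w & 1:            # this power of two is present in the weight
            acc += x << bit  # add x shifted by the bit position
        w >>= 1
        bit += 1
    return acc

# 13 * 11 = 13*8 + 13*2 + 13*1 = 143, computed without a multiply.
result = const_multiply_shift_add(13, 11)
```

Since quantized weights are fixed at compile time, the shift amounts can be baked into the P4 program rather than computed per packet.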