🤖 AI Summary
To address the inherent trade-off between accuracy and efficiency in traffic classification on programmable data planes, this paper proposes Synecdoche, a lightweight temporal pattern matching framework based on Key Segments. It introduces a novel “offline deep mining, online hardware matching” paradigm that, for the first time, compresses packet-level temporal features into compact key segments that can be matched directly in switch hardware. The approach integrates P4 compiler optimizations, compact hash tables, and SRAM-aware encoding to enable real-time line-rate processing. Experimental evaluation demonstrates F1-score improvements of up to 26.4% over statistical baselines and 18.3% over online deep learning approaches, while reducing latency by 13.0% and SRAM footprint by 79.2%. The framework sustains full line-rate classification at 100 Gbps on commodity programmable switches.
📝 Abstract
Traffic classification on programmable data planes holds great promise for line-rate processing, with methods evolving from per-packet to flow-level analysis in pursuit of higher accuracy. However, a trade-off between accuracy and efficiency persists: statistical feature-based methods align with hardware constraints but often exhibit limited accuracy, while online deep learning methods using packet sequential features achieve superior accuracy but require substantial computational resources. This paper presents Synecdoche, the first traffic classification framework to successfully deploy packet sequential features on a programmable data plane via pattern matching, achieving both high accuracy and efficiency. Our key insight is that discriminative information concentrates in short sub-sequences, termed Key Segments, which serve as compact traffic features for efficient data-plane matching. Synecdoche employs an "offline discovery, online matching" paradigm: deep learning models automatically discover Key Segment patterns offline, which are then compiled into optimized table entries for direct data-plane matching. Extensive experiments demonstrate Synecdoche's superior accuracy, improving F1-scores by up to 26.4% over statistical methods and 18.3% over online deep learning methods, while reducing latency by 13.0% and SRAM usage by 79.2%. The source code of Synecdoche is publicly available to facilitate reproducibility and further research.
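The "offline discovery, online matching" idea can be illustrated with a minimal sketch. Note this is a toy illustration of the general paradigm, not the paper's actual algorithm: here a "key segment" is assumed to be a short contiguous sub-sequence of packet lengths, discovery is a simple frequency heuristic rather than a deep learning model, and the match table is a Python dict standing in for compiled data-plane table entries. All names, thresholds, and data below are hypothetical.

```python
# Toy sketch of "offline discovery, online matching" for traffic classification.
# Offline: find short packet-length sub-sequences (key segments) frequent in one
# class but absent in others. Online: classify a flow by looking up its sliding
# windows in a match table (a dict here; table entries on a real data plane).
from collections import Counter

SEG_LEN = 3  # assumed key-segment length


def segments(flow):
    """All contiguous sub-sequences of length SEG_LEN from a packet-length sequence."""
    return [tuple(flow[i:i + SEG_LEN]) for i in range(len(flow) - SEG_LEN + 1)]


def discover_key_segments(flows_by_class, min_support=2):
    """Offline stage: keep segments frequent in exactly one class (toy heuristic,
    standing in for the paper's deep-learning-based discovery)."""
    counts = {c: Counter(s for f in flows for s in segments(f))
              for c, flows in flows_by_class.items()}
    table = {}
    for c, cnt in counts.items():
        for seg, n in cnt.items():
            if n >= min_support and all(other[seg] == 0
                                        for oc, other in counts.items() if oc != c):
                table[seg] = c  # would be compiled into a data-plane table entry
    return table


def classify(flow, table):
    """Online stage: first key-segment hit decides the class (match-table friendly)."""
    for seg in segments(flow):
        if seg in table:
            return table[seg]
    return None


# Illustrative per-class packet-length sequences (fabricated toy data).
train = {"video": [[1500, 1500, 1500, 60], [1500, 1500, 1500, 40]],
         "web":   [[60, 1500, 60, 60], [60, 1500, 60, 40]]}
table = discover_key_segments(train)
print(classify([40, 1500, 1500, 1500, 60], table))  # → video
```

The split mirrors the paradigm's appeal: all expensive computation (pattern discovery) happens offline, while the online path is a handful of exact-match lookups, which is the kind of operation programmable switches execute at line rate.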