PACC: Protocol-Aware Cross-Layer Compression for Compact Network Traffic Representation

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Amid the widespread adoption of encryption and evolving network protocols, representing network traffic entails a fundamental trade-off between compactness and cross-layer semantic completeness. This work proposes PACC, a novel framework that explicitly models both intra- and inter-layer redundancies within the protocol stack by decomposing traffic into shared and private components. Through a joint optimization of multi-view representation learning, contrastive mutual information maximization, hierarchical reconstruction, and task-aware supervision, PACC learns compact representations that balance information fidelity with compression efficiency. Evaluated on encrypted application classification, IoT device identification, and intrusion detection tasks, PACC achieves up to a 12.9% improvement in accuracy and a 3.16× gain in inference efficiency, significantly outperforming baselines based on handcrafted features, raw packet bits, and large models.

📝 Abstract
Network traffic classification is a core primitive for network security and management, yet it is increasingly challenged by pervasive encryption and evolving protocols. A central bottleneck is representation: hand-crafted flow statistics are efficient but often too lossy, raw-bit encodings can be accurate but are costly, and recent pre-trained embeddings provide transfer but frequently flatten the protocol stack and entangle signals across layers. We observe that real traffic contains substantial redundancy both across network layers and within each layer; existing paradigms do not explicitly identify and remove this redundancy, leading to wasted capacity, shortcut learning, and degraded generalization. To address this, we propose PACC, a redundancy-aware, layer-aware representation framework. PACC treats the protocol stack as multi-view inputs and learns compact layer-wise projections that remain faithful to each layer while explicitly factorizing representations into shared (cross-layer) and private (layer-specific) components. We operationalize these goals with a joint objective that preserves layer-specific information via reconstruction, captures shared structure via contrastive mutual-information learning, and maximizes task-relevant information via supervised losses, yielding compact latents suitable for efficient inference. Across datasets covering encrypted application classification, IoT device identification, and intrusion detection, PACC consistently outperforms feature-engineered and raw-bit baselines. On encrypted subsets, it achieves up to a 12.9% accuracy improvement over nPrint, matches or surpasses strong foundation-model baselines, and improves end-to-end efficiency by up to 3.16×.
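The joint objective described in the abstract combines three terms: per-layer reconstruction (to preserve private, layer-specific information), contrastive mutual-information maximization across layers (to align shared components), and a supervised classification loss. The sketch below, in plain NumPy, is a minimal illustration of that loss structure only; all names, linear projections, and weighting coefficients are illustrative assumptions, not the paper's actual architecture or formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(z1, z2, tau=0.1):
    """InfoNCE contrastive loss: matching rows of z1/z2 are positive pairs."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                      # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # pull diagonal (positives) up

def joint_loss(views, labels, W_shared, W_private, W_dec, W_cls,
               lam_rec=1.0, lam_con=1.0, lam_sup=1.0):
    """Hypothetical PACC-style objective over per-layer traffic views.

    views: list of (B, d) arrays, one per protocol layer (multi-view input).
    Each view is factorized into a shared and a private latent via
    illustrative linear projections W_shared[i], W_private[i].
    """
    shared = [v @ W_shared[i] for i, v in enumerate(views)]
    private = [v @ W_private[i] for i, v in enumerate(views)]
    # 1) hierarchical reconstruction: each layer rebuilt from [shared; private]
    rec = sum(np.mean((np.concatenate([s, p], axis=1) @ W_dec[i] - v) ** 2)
              for i, (v, s, p) in enumerate(zip(views, shared, private)))
    # 2) contrastive MI across layer pairs: shared parts should align
    con = sum(info_nce(shared[i], shared[j])
              for i in range(len(views)) for j in range(i + 1, len(views)))
    # 3) task-aware supervision: cross-entropy on the pooled shared latent
    logits = np.mean(shared, axis=0) @ W_cls
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    sup = -np.mean(log_probs[np.arange(len(labels)), labels])
    return lam_rec * rec + lam_con * con + lam_sup * sup

# Toy run: 2 layers, batch of 8 flows, 16-dim views, 4-dim latents, 3 classes.
B, d, k, C, L = 8, 16, 4, 3, 2
views = [rng.normal(size=(B, d)) for _ in range(L)]
labels = rng.integers(0, C, size=B)
W_shared = [rng.normal(size=(d, k)) * 0.1 for _ in range(L)]
W_private = [rng.normal(size=(d, k)) * 0.1 for _ in range(L)]
W_dec = [rng.normal(size=(2 * k, d)) * 0.1 for _ in range(L)]
W_cls = rng.normal(size=(k, C)) * 0.1
loss = joint_loss(views, labels, W_shared, W_private, W_dec, W_cls)
```

In a real system the linear maps would be learned encoders/decoders trained by gradient descent; the point here is only how the three loss terms compose into one objective over shared and private factors.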
Problem

Research questions and friction points this paper is trying to address.

network traffic classification
protocol-aware representation
cross-layer redundancy
encrypted traffic
compact representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

cross-layer compression
protocol-aware representation
redundancy-aware learning
multi-view network traffic
compact latent representation
Zhaochen Guo
University of Electronic Science and Technology of China
Tianyufei Zhou
University of Hong Kong
Honghao Wang
Renmin University of China
Ronghua Li
Hong Kong Polytechnic University
Shinan Liu
Assistant Professor, University of Hong Kong
Networking · Security · Measurement · Machine Learning Systems