TrafficMoE: Heterogeneity-aware Mixture of Experts for Encrypted Traffic Classification

📅 2026-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Encrypted traffic lacks semantic payload information, making it difficult for conventional static, homogeneous models to separate protocol structure from encryption-induced noise, which degrades fine-grained classification performance. To address this challenge, the paper proposes TrafficMoE, the first framework to introduce a heterogeneity-aware dynamic sparse mixture-of-experts mechanism into encrypted traffic analysis. It adopts a Disentangle-Filter-Aggregate (DFA) paradigm: a dual-branch heterogeneous architecture processes packet headers and payloads separately, an uncertainty-aware filtering module suppresses high-variance representations, and a routing-guided mechanism adaptively fuses the multimodal features. Extensive experiments across six datasets show that TrafficMoE significantly outperforms state-of-the-art methods, confirming both the effectiveness and the necessity of heterogeneous modeling for encrypted traffic analysis.
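
The summary's core mechanism is a dynamic sparse mixture-of-experts. The page itself contains no code, so the following is a minimal sketch of a generic top-k sparse MoE layer in PyTorch, for orientation only; the class name, expert widths, and the `num_experts`/`top_k` values are illustrative assumptions rather than the authors' design (the repository linked in the abstract is the authoritative source).

```python
# Hedged sketch of a generic top-k sparse mixture-of-experts layer.
# Not the authors' implementation; names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Routes each token to its top-k experts and gate-weights their outputs,
    so only a sparse subset of parameters is active per input."""

    def __init__(self, dim: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, num_experts)  # router producing expert scores
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        flat = x.reshape(-1, d)                       # per-token routing
        scores, idx = self.gate(flat).topk(self.top_k, dim=-1)
        weights = F.softmax(scores, dim=-1)           # renormalize the kept experts
        out = torch.zeros_like(flat)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                sel = idx[:, k] == e                  # tokens routed to expert e in slot k
                if sel.any():
                    out[sel] += weights[sel, k:k + 1] * expert(flat[sel])
        return out.reshape(b, t, d)
```

In TrafficMoE's dual-branch setting, one such expert stack would serve the header branch and a separate one the payload branch, which is what makes the modeling heterogeneous rather than uniformly shared.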
📝 Abstract
Encrypted traffic classification is a critical task for network security. While deep learning has advanced the field, the occlusion of payload semantics by encryption severely challenges standard modeling approaches. Most existing frameworks rely on static, homogeneous pipelines that apply uniform parameter sharing and fixed fusion strategies to all inputs. This one-size-fits-all design is inherently flawed: by forcing structured headers and randomized payloads through a unified processing pipeline, it inevitably entangles raw protocol signals with stochastic encryption noise, degrading fine-grained discriminative features. In this paper, we propose TrafficMoE, a framework that breaks the bottleneck of static modeling by establishing a Disentangle-Filter-Aggregate (DFA) paradigm. Specifically, to resolve the structural conflict between components, the architecture disentangles headers and payloads using a dual-branch sparse Mixture-of-Experts (MoE), enabling modality-specific modeling. To mitigate the impact of stochastic noise, an uncertainty-aware filtering mechanism quantifies reliability and selectively suppresses high-variance representations. Finally, to overcome the limitations of static fusion, a routing-guided strategy dynamically aggregates cross-modality features, adaptively weighting their contributions according to traffic context. Through this DFA paradigm, TrafficMoE maximizes representational efficiency by focusing on the most discriminative traffic features. Extensive experiments on six datasets demonstrate that TrafficMoE consistently outperforms state-of-the-art methods, validating the necessity of heterogeneity-aware modeling in encrypted traffic analysis. The source code is publicly available at https://github.com/Posuly/TrafficMoE_main.
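
The abstract specifies three stages but no equations, so the sketch below only illustrates plausible shapes for the second and third stages: an uncertainty-aware filter that soft-suppresses high-variance token representations, and a routing-guided fusion head that weighs the header and payload branches per input instead of applying a static rule. The learned variance head, the 1/(1+var) reliability weighting, and the two-way softmax router are all assumptions made for illustration, not the published design.

```python
# Hedged sketch of the Filter and Aggregate stages of the DFA paradigm.
# The specific parameterizations below are assumptions, not the paper's equations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UncertaintyFilter(nn.Module):
    """Predicts a per-token variance and downweights high-variance (noisy)
    representations, approximating 'selectively suppress high-variance features'."""

    def __init__(self, dim: int):
        super().__init__()
        self.log_var = nn.Linear(dim, 1)             # assumed variance head

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        var = self.log_var(h).exp()                  # (batch, tokens, 1), positive
        reliability = 1.0 / (1.0 + var)              # ~1 for confident tokens, -> 0 for noisy
        return h * reliability                       # soft suppression, not hard dropping

class RoutingGuidedFusion(nn.Module):
    """Fuses pooled header/payload features with input-dependent weights,
    replacing a fixed (static) fusion rule."""

    def __init__(self, dim: int):
        super().__init__()
        self.router = nn.Linear(2 * dim, 2)          # scores the two modalities

    def forward(self, header: torch.Tensor, payload: torch.Tensor) -> torch.Tensor:
        # header, payload: (batch, dim) pooled branch representations
        w = F.softmax(self.router(torch.cat([header, payload], dim=-1)), dim=-1)
        return w[:, :1] * header + w[:, 1:] * payload  # convex, context-dependent mix
```

A classification head over the fused vector would then yield class logits; mean-pooling each branch's filtered tokens before fusion is one plausible reduction consistent with the abstract's description.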
Problem

Research questions and friction points this paper is trying to address.

encrypted traffic classification
heterogeneity
static modeling
payload semantics occlusion
feature entanglement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture of Experts
encrypted traffic classification
heterogeneity-aware modeling
uncertainty-aware filtering
dynamic feature fusion
Qing He
School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
Xiaowei Fu
School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
Lei Zhang
Chongqing University
Computer Vision · Trustworthy AI · Domain Generalization · Transfer Learning · Intelligent Olfaction