Mean Masked Autoencoder with Flow-Mixing for Encrypted Traffic Classification

📅 2026-03-31
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the limitations of existing masked autoencoder-based approaches for encrypted traffic classification, which are confined to byte-level reconstruction within individual flows and struggle to capture multi-granular contextual semantics. To overcome this, the authors propose Mean MAE (MMAE), a novel framework that introduces flow-level semantic supervision via a teacher–student self-distillation mechanism and incorporates a dynamic FlowMix strategy to enable cross-flow mixing, thereby transcending the information bottleneck of single-flow analysis. Additionally, MMAE employs a packet-importance-aware masking predictor that leverages packet-level side-channel statistics and attention bias to dynamically select high-semantic-density tokens for masked reconstruction. Evaluated across multiple datasets encompassing encrypted applications, malware, and attack traffic, MMAE consistently outperforms state-of-the-art methods.
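In "mean" teacher–student self-distillation setups of this kind (mean-teacher / DINO-style pre-training), the teacher is typically maintained as an exponential moving average (EMA) of the student's weights rather than trained by gradients. The summary does not spell out MMAE's exact update rule, so the following is a minimal, hypothetical sketch of a generic EMA teacher update; the parameter layout and `momentum` value are illustrative assumptions:

```python
import numpy as np

def ema_update(teacher, student, momentum=0.999):
    """Move each teacher parameter a small step toward the student's.

    teacher, student: lists of np.ndarray parameters (hypothetical layout).
    The teacher receives no gradients; it only tracks this moving average,
    which is what lets it supply stable flow-level supervision targets.
    """
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher, student)]

# Toy usage: over many student steps the teacher drifts toward the student.
teacher = [np.zeros(4)]
student = [np.ones(4)]
for _ in range(100):
    teacher = ema_update(teacher, student, momentum=0.99)
```

With momentum 0.99, after 100 steps the teacher has closed a fraction 1 − 0.99¹⁰⁰ (about 63%) of the gap to the student, which is the slow-following behavior self-distillation relies on.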
📝 Abstract
Network traffic classification using self-supervised pre-training models based on Masked Autoencoders (MAE) has demonstrated great potential. However, existing methods are confined to isolated byte-level reconstruction of individual flows, lacking adequate perception of the multi-granularity contextual relationships in traffic. To address this limitation, we propose Mean MAE (MMAE), a teacher–student MAE paradigm with a flow-mixing strategy for building an encrypted traffic pre-training model. MMAE employs a self-distillation mechanism for teacher–student interaction, where the teacher provides unmasked flow-level semantic supervision to advance the student from local byte reconstruction to multi-granularity comprehension. To break the information bottleneck of individual flows, we introduce a dynamic Flow Mixing (FlowMix) strategy to replace the traditional random masking mechanism. By constructing challenging cross-flow mixed samples with interference, it compels the model to learn discriminative representations from distorted tokens. Furthermore, we design a Packet-importance aware Mask Predictor (PMP) equipped with an attention bias mechanism that leverages packet-level side-channel statistics to dynamically mask tokens with high semantic density. Extensive experiments on datasets covering encrypted applications, malware, and attack traffic demonstrate that MMAE achieves state-of-the-art performance. The code is available at https://github.com/lx6c78/MMAE.
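The core FlowMix idea described above — replacing some of a flow's tokens with tokens drawn from another flow, steered toward positions the mask predictor deems semantically dense — can be sketched roughly as follows. The exact mixing policy and the PMP's attention-bias scoring are the paper's contributions and are not reproduced here; `flow_mix`, `mix_ratio`, and the `importance` scores are illustrative assumptions:

```python
import numpy as np

def flow_mix(flow_a, flow_b, importance, mix_ratio=0.25):
    """Replace the highest-importance token positions of flow_a with the
    corresponding tokens of flow_b (an illustrative stand-in for the
    paper's dynamic FlowMix plus importance-aware mask prediction).

    flow_a, flow_b: (n_tokens, dim) arrays of token embeddings.
    importance: per-token scores, e.g. from packet side-channel statistics.
    Returns the mixed sequence and a boolean mask of replaced positions,
    which the model would be trained to detect/reconstruct.
    """
    n = flow_a.shape[0]
    k = int(n * mix_ratio)
    # Target the k tokens with the highest semantic density for mixing.
    replaced = np.argsort(importance)[::-1][:k]
    mixed = flow_a.copy()
    mixed[replaced] = flow_b[replaced]
    mask = np.zeros(n, dtype=bool)
    mask[replaced] = True
    return mixed, mask

# Toy usage with 8 tokens of dimension 4.
rng = np.random.default_rng(0)
a, b = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
scores = rng.random(8)
mixed, mask = flow_mix(a, b, scores, mix_ratio=0.25)
```

Unlike uniform random masking, tying the replaced positions to an importance score forces the model to work hardest exactly where the flow carries the most discriminative signal, which is the stated motivation for the PMP.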
Problem

Research questions and friction points this paper is trying to address.

encrypted traffic classification
masked autoencoder
multi-granularity context
self-supervised pre-training
flow-level semantics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Masked Autoencoder
Flow Mixing
Self-distillation
Encrypted Traffic Classification
Multi-granularity Representation