MOOSComp: Improving Lightweight Long-Context Compressor via Mitigating Over-Smoothing and Incorporating Outlier Scores

📅 2025-04-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high inference overhead and weak discriminative capability of lightweight compressors for long-context reasoning in resource-constrained settings, this paper proposes a lightweight token-classification-based context compression method. It introduces an inter-class cosine similarity loss to mitigate representation over-smoothing during training, and an outlier scoring mechanism to preserve rare but critical tokens during compression, improving the compressor at both stages. Built on the BERT architecture, the method supports end-to-end differentiable training. Extensive evaluations on long-context understanding and reasoning benchmarks show significant improvements over state-of-the-art approaches: at a 4x compression ratio, it achieves a 3.3x inference speedup on a mobile device and a 12.7% gain in classification accuracy, while maintaining robust compression performance and strong task generalization.

📝 Abstract
Recent advances in large language models have significantly improved their ability to process long-context input, but practical applications are challenged by increased inference time and resource consumption, particularly in resource-constrained environments. To address these challenges, we propose MOOSComp, a token-classification-based long-context compression method that enhances the performance of a BERT-based compressor by mitigating the over-smoothing problem and incorporating outlier scores. In the training phase, we add an inter-class cosine similarity loss term to penalize excessively similar token representations, thereby improving the token classification accuracy. During the compression phase, we introduce outlier scores to preserve rare but critical tokens that are prone to be discarded in task-agnostic compression. These scores are integrated with the classifier's output, making the compressor more generalizable to various tasks. Superior performance is achieved at various compression ratios on long-context understanding and reasoning benchmarks. Moreover, our method obtains a speedup of 3.3x at a 4x compression ratio on a resource-constrained mobile device.
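The inter-class cosine similarity loss described above can be sketched as follows. This is an illustrative, dependency-free sketch, not the paper's exact formulation: it assumes binary keep/discard labels per token and penalizes alignment between the two class centroids, so that "keep" and "discard" representations stay separable instead of collapsing into similar (over-smoothed) vectors.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-8)

def centroid(vectors):
    """Mean vector of a non-empty list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def inter_class_cosine_loss(hidden, labels):
    """Loss term that grows as the centroids of the 'keep' (label 1)
    and 'discard' (label 0) token classes become more aligned.
    Adding it to the classification loss discourages over-smoothed,
    near-identical token representations. Sketch only; the paper's
    exact loss may differ."""
    keep = [h for h, y in zip(hidden, labels) if y == 1]
    drop = [h for h, y in zip(hidden, labels) if y == 0]
    return cosine(centroid(keep), centroid(drop))
```

In a full training loop this term would be weighted and added to the usual token-classification cross-entropy, so gradients push the two class centroids apart.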
Problem

Research questions and friction points this paper is trying to address.

Mitigating over-smoothing in BERT-based compressors
Incorporating outlier scores to preserve critical tokens
Improving long-context compression efficiency and speed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token-classification-based long-context compression
Inter-class cosine similarity loss term
Outlier scores for critical token preservation
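The outlier-score idea in the list above can be sketched as a simple blend at compression time. The outlier measure below (z-score of each token's hidden-state norm) is a hypothetical stand-in for the paper's scoring mechanism, and `alpha` is an assumed mixing weight; the point is only to show how an outlier signal can be combined with the classifier's keep-probabilities so rare but critical tokens survive task-agnostic compression.

```python
import math

def combined_scores(keep_probs, hidden, alpha=0.5):
    """Blend classifier keep-probabilities with an outlier score.
    Here the outlier score is the normalized z-score of each token's
    hidden-state norm (a stand-in for the paper's measure), so tokens
    with unusual representations are less likely to be discarded."""
    norms = [math.sqrt(sum(x * x for x in h)) for h in hidden]
    mean = sum(norms) / len(norms)
    std = math.sqrt(sum((n - mean) ** 2 for n in norms) / len(norms)) + 1e-8
    z = [abs(n - mean) / std for n in norms]
    zmax = max(z) + 1e-8
    outlier = [v / zmax for v in z]  # scaled to [0, 1]
    return [(1 - alpha) * p + alpha * o for p, o in zip(keep_probs, outlier)]

def compress(tokens, scores, ratio=4):
    """Keep the top 1/ratio of tokens by combined score,
    preserving their original order."""
    k = max(1, len(tokens) // ratio)
    top = sorted(sorted(range(len(tokens)), key=lambda i: scores[i])[-k:])
    return [tokens[i] for i in top]
```

For example, `compress(list("abcdefgh"), [0.1, 0.9, 0.2, 0.3, 0.8, 0.1, 0.2, 0.1], ratio=4)` keeps the two highest-scoring tokens in order.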
Fengwei Zhou (OPPO CTG)
Jiafei Song (OPPO CTG)
Wenjin Jason Li (OPPO CTG)
Gengjian Xue (OPPO CTG)
Zhikang Zhao (OPPO CTG)
Yichao Lu (Layer 6 AI) — Recommender Systems, Natural Language Processing, Computer Vision
Bailin Na (OPPO CTG)