Arbitrary Ratio Feature Compression via Next Token Prediction

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing feature compression methods typically require training dedicated models for specific compression ratios, lacking flexibility and generalization. This work proposes ARFC, a unified framework that, for the first time, enables a single model to support arbitrary compression ratios. By leveraging an autoregressive next-token prediction mechanism, ARFC allows flexible control of the compression ratio at inference time simply by adjusting the number of generated tokens. To enhance robustness, the framework incorporates a Mixture of Solutions (MoS) module and introduces an Entity Relation Graph Constraint (ERGC) to preserve semantic and structural information. Experiments demonstrate that ARFC consistently outperforms existing methods across multiple tasks—including cross-modal retrieval, image classification, and image retrieval—under various compression ratios, and in some scenarios even surpasses the performance of the original uncompressed features.

📝 Abstract
Feature compression is increasingly important for improving the efficiency of downstream tasks, especially in applications involving large-scale or multi-modal data. While existing methods typically rely on dedicated models for achieving specific compression ratios, they are often limited in flexibility and generalization. In particular, retraining is necessary when adapting to a new compression ratio. To address this limitation, we propose a novel and flexible Arbitrary Ratio Feature Compression (ARFC) framework, which supports any compression ratio with a single model, eliminating the need for multiple specialized models. At its core, the Arbitrary Ratio Compressor (ARC) is an auto-regressive model that performs compression via next-token prediction. This allows the compression ratio to be controlled at inference simply by adjusting the number of generated tokens. To enhance the quality of the compressed features, two key modules are introduced. The Mixture of Solutions (MoS) module refines the compressed tokens by utilizing multiple compression results (solutions), reducing uncertainty and improving robustness. The Entity Relation Graph Constraint (ERGC) is integrated into the training process to preserve semantic and structural relationships during compression. Extensive experiments on cross-modal retrieval, image classification, and image retrieval tasks across multiple datasets demonstrate that our method consistently outperforms existing approaches at various compression ratios. Notably, in some cases, it even surpasses the performance of the original, uncompressed features. These results validate the effectiveness and versatility of ARFC for practical, resource-constrained scenarios.
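The core idea in the abstract—an autoregressive compressor whose ratio is set at inference time by the number of tokens generated—can be illustrated with a minimal sketch. This is not the authors' implementation: `ToyARCompressor` and its chunk-averaging "predictor" are hypothetical stand-ins for the learned next-token model, kept deterministic so the example is self-contained.

```python
class ToyARCompressor:
    """Toy autoregressive compressor: emits tokens one at a time, so the
    compression ratio (feature_dim / n_tokens) is chosen at inference."""

    def __init__(self, feature_dim: int):
        self.feature_dim = feature_dim

    def compress(self, feature: list[float], n_tokens: int) -> list[float]:
        """Generate n_tokens compressed tokens from the input feature.
        No retraining is needed to change the ratio -- only n_tokens."""
        assert len(feature) == self.feature_dim and n_tokens >= 1
        tokens = []
        chunk = max(1, self.feature_dim // n_tokens)
        for step in range(n_tokens):
            # In ARFC this step would be a learned predictor conditioned
            # on the feature and on previously generated tokens; a chunk
            # mean keeps this sketch runnable without a trained model.
            lo = step * chunk
            hi = self.feature_dim if step == n_tokens - 1 else lo + chunk
            tokens.append(sum(feature[lo:hi]) / (hi - lo))
        return tokens

comp = ToyARCompressor(feature_dim=8)
feat = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
print(len(comp.compress(feat, n_tokens=2)))  # 4x compression -> 2 tokens
print(len(comp.compress(feat, n_tokens=4)))  # 2x compression, same "model"
```

The point of the sketch is the interface, not the arithmetic: one compressor object serves every ratio, which is the flexibility the paper claims over fixed-ratio models.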
Problem

Research questions and friction points this paper is trying to address.

feature compression
compression ratio
model flexibility
generalization
retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Arbitrary Ratio Feature Compression
Next Token Prediction
Mixture of Solutions
Entity Relation Graph Constraint
Auto-regressive Compression
Yufan Liu
Institute of Automation, Chinese Academy of Sciences
Image/video processing, Knowledge Distillation, Saliency detection, Model compression, Video coding
Daoyuan Ren
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences; CAS Center for Excellence in Brain Science and Intelligence Technology
Zhipeng Zhang
School of Artificial Intelligence, Shanghai Jiao Tong University
Computer Vision, Object Tracking and Segmentation
Wenyang Luo
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences; CAS Center for Excellence in Brain Science and Intelligence Technology
Bing Li
Professor at the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
Video Analysis, Color Constancy, Web Mining, Multimedia
Weiming Hu
NLPR
Computer Vision
Stephen Maybank
School of Computer Science and Mathematics, Birkbeck College, University of London, London WC1E 7HX, U.K.