DT-UFC: Universal Large Model Feature Coding via Peaky-to-Balanced Distribution Transformation

📅 2025-06-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing distributed large-model deployment approaches lack generality in cross-model feature encoding, as task- or model-specific methods fail to handle distributional skew and incompatibility among heterogeneous features (e.g., LLaMA3, DINOv2, SD3). Method: We propose the first universal large-model feature encoding framework, centered on a learnable “peak-to-uniform” distribution transformation mechanism. This data-driven, non-uniform, plug-and-play neural network maps heterogeneous features into a unified, balanced distribution space—without modifying downstream encoders/decoders. Integrated with universal quantization and entropy coding, it is jointly validated across multiple models and tasks. Contribution/Results: Experiments show that our method reduces average bit-rate by 32% and improves cross-model reconstruction PSNR by 4.7 dB over task-specific baselines, significantly enhancing generalizability and compression efficiency.

Technology Category

Application Category

📝 Abstract
Like image coding in visual data transmission, feature coding is essential for the distributed deployment of large models by significantly reducing transmission and storage overhead. However, prior studies have mostly targeted task- or model-specific scenarios, leaving the challenge of universal feature coding across diverse large models largely unaddressed. In this paper, we present the first systematic study on universal feature coding for large models. The key challenge lies in the inherently diverse and distributionally incompatible nature of features extracted from different models. For example, features from DINOv2 exhibit highly peaky, concentrated distributions, while those from Stable Diffusion 3 (SD3) are more dispersed and uniform. This distributional heterogeneity severely hampers both compression efficiency and cross-model generalization. To address this, we propose a learned peaky-to-balanced distribution transformation, which reshapes highly skewed feature distributions into a common, balanced target space. This transformation is non-uniform, data-driven, and plug-and-play, enabling effective alignment of heterogeneous distributions without modifying downstream codecs. With this alignment, a universal codec trained on the balanced target distribution can effectively generalize to features from different models and tasks. We validate our approach on three representative large models-LLaMA3, DINOv2, and SD3-across multiple tasks and modalities. Extensive experiments show that our method achieves notable improvements in both compression efficiency and cross-model generalization over task-specific baselines. All source code will be released for future research.
Problem

Research questions and friction points this paper is trying to address.

Universal feature coding for diverse large models
Addressing distributional incompatibility in model features
Improving compression and cross-model generalization efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learned peaky-to-balanced distribution transformation
Non-uniform data-driven plug-and-play alignment
Universal codec for diverse large models
🔎 Similar Papers
No similar papers found.