Compressed Feature Quality Assessment: Dataset and Baselines

πŸ“… 2025-06-09
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This paper introduces the novel problem of Compressed Feature Quality Assessment (CFQA), which aims to quantify the semantic distortion induced by compressing intermediate features of large models—a degradation that conventional metrics (e.g., MSE, cosine similarity) fail to capture. To this end, the authors establish the first CFQA benchmark: 300 original features and 12,000 compressed features spanning three visual tasks and four mainstream codecs, with task performance drop serving as the ground-truth semantic distortion label. Using a standardized multi-task feature extraction and compression framework, they systematically evaluate existing metrics and demonstrate their insufficient representational capacity, showing the benchmark to be both representative and challenging. All data, code, and evaluation protocols are publicly released, establishing a foundational resource for CFQA research.

πŸ“ Abstract
The widespread deployment of large models in resource-constrained environments has underscored the need for efficient transmission of intermediate feature representations. In this context, feature coding, which compresses features into compact bitstreams, becomes a critical component for scenarios involving feature transmission, storage, and reuse. However, this compression process introduces inherent semantic degradation that is notoriously difficult to quantify with traditional metrics. To address this, this paper introduces the research problem of Compressed Feature Quality Assessment (CFQA), which seeks to evaluate the semantic fidelity of compressed features. To advance CFQA research, we propose the first benchmark dataset, comprising 300 original features and 12,000 compressed features derived from three vision tasks and four feature codecs. Task-specific performance drops are provided as true semantic distortion for the evaluation of CFQA metrics. We assess the performance of three widely used metrics (MSE, cosine similarity, and Centered Kernel Alignment) in capturing semantic degradation. The results underscore the representativeness of the dataset and highlight the need for more refined metrics capable of addressing the nuances of semantic distortion in compressed features. To facilitate the ongoing development of CFQA research, we release the dataset and all accompanying source code at https://github.com/chansongoal/Compressed-Feature-Quality-Assessment. This contribution aims to advance the field and provide a foundational resource for the community to explore CFQA.
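The three baseline metrics named in the abstract (MSE, cosine similarity, and linear Centered Kernel Alignment) can be sketched as follows. This is a minimal illustration, not the paper's released code: the feature shapes, the flattening step for cosine similarity, and the choice of the linear CKA variant are assumptions.

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between original and compressed features."""
    return float(np.mean((a - b) ** 2))

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between the flattened feature tensors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear CKA between two feature matrices of shape (n_samples, n_dims).

    Columns are mean-centered, then
    CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F).
    """
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    num = np.linalg.norm(y.T @ x, ord="fro") ** 2
    den = np.linalg.norm(x.T @ x, ord="fro") * np.linalg.norm(y.T @ y, ord="fro")
    return float(num / den)
```

For identical inputs these return 0.0 (MSE), 1.0 (cosine), and 1.0 (CKA); the paper's point is that small deviations in these values need not track the actual drop in downstream task performance.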
Problem

Research questions and friction points this paper is trying to address.

Evaluating semantic fidelity of compressed features
Quantifying semantic degradation from feature compression
Developing metrics for compressed feature quality assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Compressed Feature Quality Assessment (CFQA) benchmark
Evaluates semantic fidelity using three vision tasks
Uses task performance drop as ground-truth semantic distortion, exposing the limits of MSE, cosine similarity, and CKA