🤖 AI Summary
This work addresses the underdeveloped state of rate-distortion theory for machine vision tasks. It systematically extends classical rate-distortion theory to AI-perception scenarios, establishing a joint rate–task-performance optimization framework for downstream vision tasks, including classification, detection, and segmentation. The authors propose a task-adaptive distortion model, learnable end-to-end encoders and decoders, and a unified training strategy, making the fundamental trade-off between bit rate and task performance explicit. Experiments on multiple benchmarks demonstrate state-of-the-art rate-distortion performance: at equivalent task accuracy, the proposed method reduces bit rate by up to 42%, improving both coding efficiency and downstream robustness.
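As a rough sketch (not quoted from the paper), the joint rate–task-performance trade-off described above can be written as a Lagrangian objective, where $e_\theta$ and $d_\phi$ denote the learnable encoder and decoder, $f$ the downstream vision model, $R$ the bit rate of the compressed representation, $D_{\mathrm{task}}$ a task-specific distortion, and $\lambda$ the trade-off weight:

```latex
\min_{\theta,\phi}\; \mathbb{E}_{x}\!\left[\, R\big(e_\theta(x)\big) \;+\; \lambda\, D_{\mathrm{task}}\big(f(d_\phi(e_\theta(x))),\, f(x)\big) \,\right]
```

Sweeping $\lambda$ traces out a rate–task-performance curve analogous to a classical rate-distortion curve.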
📝 Abstract
Recent years have seen tremendous growth in both the capability and popularity of automatic machine analysis of images and video. As a result, a growing need has emerged for efficient compression methods optimized for machine vision rather than human vision, and several methods for image and video coding for machines have been developed to meet this demand. Unfortunately, while there is a substantial body of knowledge regarding rate-distortion theory for human vision, the same cannot be said of machine analysis. In this paper, we extend the current rate-distortion theory for machines, providing insight into important design considerations for machine-vision codecs. We then use this understanding to improve several methods for learnable image coding for machines. Our proposed methods achieve state-of-the-art rate-distortion performance on several computer vision tasks, such as classification, instance segmentation, and object detection.
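To make the idea concrete, below is a minimal PyTorch-style sketch of a task-driven compression objective. It is only an illustration under stated assumptions: the toy codec, the stand-in task model, the λ value, and the L1 latent penalty used as a rate proxy are all hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class TinyCodec(nn.Module):
    """Toy encoder/decoder pair standing in for a learnable image codec."""
    def __init__(self, channels=3, latent=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, latent, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent, channels, 4, stride=2, padding=1),
        )

    def forward(self, x):
        y = self.encoder(x)          # latent representation
        x_hat = self.decoder(y)      # reconstruction fed to the task model
        # Rate proxy: L1 sparsity of the latent. A real codec would instead
        # estimate -log2 p(y) with a learned entropy model.
        rate = y.abs().mean()
        return x_hat, rate

def rate_task_loss(codec, task_model, x, lam=0.01):
    """Joint objective: rate + lambda * task distortion (illustrative)."""
    x_hat, rate = codec(x)
    with torch.no_grad():
        target = task_model(x)       # task output on the original image
    pred = task_model(x_hat)         # task output on the reconstruction
    task_distortion = nn.functional.mse_loss(pred, target)
    return rate + lam * task_distortion

# Usage with a stand-in task model (e.g. a tiny classifier head)
codec = TinyCodec()
task_model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 10),
)
x = torch.randn(2, 3, 32, 32)
loss = rate_task_loss(codec, task_model, x)
loss.backward()
```

In an actual coding-for-machines pipeline, the rate term would come from a learned entropy model and the task distortion would be the downstream task's own loss (for example, cross-entropy for classification or a detection loss), matching the tasks listed in the abstract.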